Commit 3aa67981
Authored July 10, 2017 by wanghaoshuang

    Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle
    into pixel_softmax_layer

Parents: 29f25fbe, 6398c15c

Showing 59 changed files with 1,045 additions and 198 deletions (+1045 -198)
.pre-commit-config.yaml  +7 -0
.travis.yml  +3 -2
CMakeLists.txt  +1 -0
cmake/external/glog.cmake  +2 -0
cmake/external/protobuf.cmake  +59 -0
cmake/generic.cmake  +46 -15
doc_theme/templates/layout.html  +1 -1
go/cmd/master/CMakeLists.txt  +1 -1
go/cmd/pserver/CMakeLists.txt  +1 -1
go/master/c/client.go  +12 -1
go/master/client.go  +12 -5
go/master/client_test.go  +8 -3
go/pserver/client/c/CMakeLists.txt  +4 -1
go/pserver/client/c/test/CMakeLists.txt  +1 -1
go/pserver/optimizer.go  +3 -3
go/pserver/service.go  +4 -2
paddle/api/CMakeLists.txt  +1 -0
paddle/framework/CMakeLists.txt  +5 -1
paddle/framework/attr_checker.h  +119 -0
paddle/framework/op_registry.h  +253 -0
paddle/framework/op_registry_test.cc  +122 -0
paddle/gserver/layers/AverageLayer.h  +4 -0
paddle/gserver/layers/CrossChannelNormLayer.cpp  +26 -11
paddle/gserver/layers/MaxLayer.h  +4 -0
paddle/gserver/layers/NormLayer.cpp  +0 -10
paddle/gserver/layers/SequenceLastInstanceLayer.cpp  +4 -6
paddle/gserver/layers/SequencePoolLayer.cpp  +2 -3
paddle/gserver/layers/SequencePoolLayer.h  +3 -4
paddle/gserver/tests/LayerGradUtil.cpp  +4 -2
paddle/gserver/tests/LayerGradUtil.h  +4 -0
paddle/gserver/tests/test_LayerGrad.cpp  +13 -3
paddle/optimizer/adadelta_optimizer.cc  +5 -3
paddle/optimizer/adagrad_optimizer.cc  +6 -3
paddle/optimizer/adam_optimizer.cc  +6 -3
paddle/optimizer/lr_policy.h  +34 -14
paddle/optimizer/sgd_optimizer.cc  +5 -1
paddle/parameter/Argument.cpp  +3 -3
paddle/parameter/Argument.h  +1 -1
paddle/parameter/tests/test_argument.cpp  +2 -2
paddle/pserver/LightNetwork.cpp  +14 -14
paddle/pserver/SocketChannel.cpp  +11 -11
paddle/pserver/test/SocketTest.cpp  +14 -14
paddle/scripts/docker/build.sh  +1 -1
paddle/scripts/travis/build_doc.sh  +6 -5
paddle/trainer/Tester.cpp  +1 -1
paddle/utils/ThreadLocal.h  +6 -6
proto/OptimizerConfig.proto  +12 -15
python/CMakeLists.txt  +2 -1
python/paddle/trainer/config_parser.py  +14 -11
python/paddle/trainer_config_helpers/layers.py  +25 -2
python/paddle/trainer_config_helpers/tests/configs/protostr/test_sequence_pooling.protostr  +51 -0
python/paddle/trainer_config_helpers/tests/configs/test_sequence_pooling.py  +8 -0
python/paddle/v2/framework/__init__.py  +1 -0
python/paddle/v2/framework/tests/CMakeLists.txt  +1 -0
python/paddle/v2/framework/tests/test_protobuf.py  +26 -0
python/paddle/v2/master/client.py  +10 -2
python/paddle/v2/reader/creator.py  +42 -5
python/paddle/v2/reader/tests/creator_test.py  +1 -1
python/setup.py.in  +8 -3
.pre-commit-config.yaml
@@ -21,3 +21,10 @@
     sha: 28c0ea8a67a3e2dbbf4822ef44e85b63a0080a29
     hooks:
     -   id: clang-formater
+-   repo: https://github.com/dnephin/pre-commit-golang
+    sha: e4693a4c282b4fc878eda172a929f7a6508e7d16
+    hooks:
+        -   id: go-fmt
+            files: (.*\.go)
+        -   id: go-lint
+            files: (.*\.go)
.travis.yml
@@ -33,16 +33,17 @@ addons:
     - ccache
 before_install:
   - if [[ "$JOB" == "check_style" ]]; then sudo ln -s /usr/bin/clang-format-3.8 /usr/bin/clang-format; fi
   # Paddle is using protobuf 3.1 currently. Protobuf 3.2 breaks the compatibility. So we specify the python
   # protobuf version.
   - pip install numpy wheel 'protobuf==3.1' sphinx==1.5.6 recommonmark sphinx-rtd-theme==0.1.9 virtualenv pre-commit requests==2.9.2 LinkChecker
   - pip install rarfile
+  - curl https://glide.sh/get | bash
   - eval "$(GIMME_GO_VERSION=1.8.3 gimme)"
   - |
     function timeout() { perl -e 'alarm shift; exec @ARGV' "$@"; }
 script:
   - |
+    export WITH_GOLANG=ON &&
     timeout 2580 paddle/scripts/travis/${JOB}.sh # 43min timeout
     RESULT=$?; if [ $RESULT -eq 0 ] || [ $RESULT -eq 142 ]; then true; else false; fi;
 notifications:
   email:
CMakeLists.txt
@@ -16,6 +16,7 @@ cmake_minimum_required(VERSION 3.0)
 set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
 set(PROJ_ROOT ${CMAKE_CURRENT_SOURCE_DIR})
+set(PROJ_BINARY_ROOT ${CMAKE_CURRENT_BINARY_DIR})
 
 include(system)
cmake/external/glog.cmake
@@ -38,12 +38,14 @@ ExternalProject_Add(
     CMAKE_ARGS      -DCMAKE_CXX_FLAGS=${CMAKE_CXX_FLAGS}
     CMAKE_ARGS      -DCMAKE_C_FLAGS=${CMAKE_C_FLAGS}
     CMAKE_ARGS      -DCMAKE_INSTALL_PREFIX=${GLOG_INSTALL_DIR}
+    CMAKE_ARGS      -DCMAKE_INSTALL_LIBDIR=${GLOG_INSTALL_DIR}/lib
     CMAKE_ARGS      -DCMAKE_POSITION_INDEPENDENT_CODE=ON
     CMAKE_ARGS      -DWITH_GFLAGS=ON
     CMAKE_ARGS      -Dgflags_DIR=${GFLAGS_INSTALL_DIR}/lib/cmake/gflags
     CMAKE_ARGS      -DBUILD_TESTING=OFF
     CMAKE_ARGS      -DCMAKE_BUILD_TYPE=Release
     CMAKE_CACHE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${GLOG_INSTALL_DIR}
+                     -DCMAKE_INSTALL_LIBDIR:PATH=${GLOG_INSTALL_DIR}/lib
                      -DCMAKE_POSITION_INDEPENDENT_CODE:BOOL=ON
                      -DCMAKE_BUILD_TYPE:STRING=Release
 )
cmake/external/protobuf.cmake
@@ -17,6 +17,65 @@ INCLUDE(ExternalProject)
 FIND_PACKAGE(Protobuf QUIET)
 SET(PROTOBUF_FOUND "OFF")
 
+if(NOT COMMAND protobuf_generate_python)  # before cmake 3.4, protobuf_genrerate_python is not defined.
+  function(protobuf_generate_python SRCS)
+    # shameless copy from https://github.com/Kitware/CMake/blob/master/Modules/FindProtobuf.cmake
+    if(NOT ARGN)
+      message(SEND_ERROR "Error: PROTOBUF_GENERATE_PYTHON() called without any proto files")
+      return()
+    endif()
+
+    if(PROTOBUF_GENERATE_CPP_APPEND_PATH)
+      # Create an include path for each file specified
+      foreach(FIL ${ARGN})
+        get_filename_component(ABS_FIL ${FIL} ABSOLUTE)
+        get_filename_component(ABS_PATH ${ABS_FIL} PATH)
+        list(FIND _protobuf_include_path ${ABS_PATH} _contains_already)
+        if(${_contains_already} EQUAL -1)
+          list(APPEND _protobuf_include_path -I ${ABS_PATH})
+        endif()
+      endforeach()
+    else()
+      set(_protobuf_include_path -I ${CMAKE_CURRENT_SOURCE_DIR})
+    endif()
+
+    if(DEFINED PROTOBUF_IMPORT_DIRS AND NOT DEFINED Protobuf_IMPORT_DIRS)
+      set(Protobuf_IMPORT_DIRS "${PROTOBUF_IMPORT_DIRS}")
+    endif()
+
+    if(DEFINED Protobuf_IMPORT_DIRS)
+      foreach(DIR ${Protobuf_IMPORT_DIRS})
+        get_filename_component(ABS_PATH ${DIR} ABSOLUTE)
+        list(FIND _protobuf_include_path ${ABS_PATH} _contains_already)
+        if(${_contains_already} EQUAL -1)
+          list(APPEND _protobuf_include_path -I ${ABS_PATH})
+        endif()
+      endforeach()
+    endif()
+
+    set(${SRCS})
+    foreach(FIL ${ARGN})
+      get_filename_component(ABS_FIL ${FIL} ABSOLUTE)
+      get_filename_component(FIL_WE ${FIL} NAME_WE)
+      if(NOT PROTOBUF_GENERATE_CPP_APPEND_PATH)
+        get_filename_component(FIL_DIR ${FIL} DIRECTORY)
+        if(FIL_DIR)
+          set(FIL_WE "${FIL_DIR}/${FIL_WE}")
+        endif()
+      endif()
+
+      list(APPEND ${SRCS} "${CMAKE_CURRENT_BINARY_DIR}/${FIL_WE}_pb2.py")
+      add_custom_command(
+        OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${FIL_WE}_pb2.py"
+        COMMAND ${Protobuf_PROTOC_EXECUTABLE} --python_out ${CMAKE_CURRENT_BINARY_DIR} ${_protobuf_include_path} ${ABS_FIL}
+        DEPENDS ${ABS_FIL} ${Protobuf_PROTOC_EXECUTABLE}
+        COMMENT "Running Python protocol buffer compiler on ${FIL}"
+        VERBATIM)
+    endforeach()
+
+    set(${SRCS} ${${SRCS}} PARENT_SCOPE)
+  endfunction()
+endif()
+
 # Print and set the protobuf library information,
 # finish this cmake process and exit from this file.
cmake/generic.cmake
@@ -88,7 +88,7 @@
 #
 # including binary directory for generated headers.
-include_directories(${CMAKE_BINARY_DIR})
+include_directories(${CMAKE_CURRENT_BINARY_DIR})
 
 if(NOT APPLE)
   find_package(Threads REQUIRED)

@@ -99,25 +99,44 @@ function(merge_static_libs TARGET_NAME)
   set(libs ${ARGN})
   list(REMOVE_DUPLICATES libs)
 
-  # First get the file names of the libraries to be merged
+  # Get all propagation dependencies from the merged libraries
   foreach(lib ${libs})
-    set(libfiles ${libfiles} $<TARGET_FILE:${lib}>)
+    list(APPEND libs_deps ${${lib}_LIB_DEPENDS})
   endforeach()
 
   if(APPLE) # Use OSX's libtool to merge archives
+    # To produce a library we need at least one source file.
+    # It is created by add_custom_command below and will
+    # also help to track dependencies.
     set(dummyfile ${CMAKE_CURRENT_BINARY_DIR}/${TARGET_NAME}_dummy.c)
+
+    # Make the generated dummy source file depended on all static input
+    # libs. If input lib changes, the source file is touched
+    # which causes the desired effect (relink).
+    add_custom_command(OUTPUT ${dummyfile}
+      COMMAND ${CMAKE_COMMAND} -E touch ${dummyfile}
+      DEPENDS ${libs})
+
+    # Generate dummy static lib
     file(WRITE ${dummyfile} "const char * dummy = \"${dummyfile}\";")
     add_library(${TARGET_NAME} STATIC ${dummyfile})
+    target_link_libraries(${TARGET_NAME} ${libs_deps})
+
+    foreach(lib ${libs})
+      # Get the file names of the libraries to be merged
+      set(libfiles ${libfiles} $<TARGET_FILE:${lib}>)
+    endforeach()
     add_custom_command(TARGET ${TARGET_NAME} POST_BUILD
       COMMAND rm "${CMAKE_CURRENT_BINARY_DIR}/lib${TARGET_NAME}.a"
       COMMAND /usr/bin/libtool -static -o "${CMAKE_CURRENT_BINARY_DIR}/lib${TARGET_NAME}.a" ${libfiles})
   else() # general UNIX: use "ar" to extract objects and re-add to a common lib
     foreach(lib ${libs})
       set(objlistfile ${lib}.objlist) # list of objects in the input library
       set(objdir ${lib}.objdir)
 
       add_custom_command(OUTPUT ${objdir}
-        COMMAND ${CMAKE_COMMAND} -E make_directory ${objdir})
+        COMMAND ${CMAKE_COMMAND} -E make_directory ${objdir}
+        DEPENDS ${lib})
 
       add_custom_command(OUTPUT ${objlistfile}
         COMMAND ${CMAKE_AR} -x "$<TARGET_FILE:${lib}>"

@@ -134,18 +153,18 @@ function(merge_static_libs TARGET_NAME)
       list(APPEND mergebases "${mergebase}")
     endforeach()
 
+    # We need a target for the output merged library
     add_library(${TARGET_NAME} STATIC ${mergebases})
+    target_link_libraries(${TARGET_NAME} ${libs_deps})
 
+    # Get the file name of the generated library
     set(outlibfile "$<TARGET_FILE:${TARGET_NAME}>")
 
     foreach(lib ${libs})
       add_custom_command(TARGET ${TARGET_NAME} POST_BUILD
-        COMMAND ${CMAKE_AR} ru ${outlibfile} @"../${lib}.objlist"
+        COMMAND ${CMAKE_AR} cr ${outlibfile} *.o
+        COMMAND ${CMAKE_RANLIB} ${outlibfile}
         WORKING_DIRECTORY ${lib}.objdir)
     endforeach()
-
-    add_custom_command(TARGET ${TARGET_NAME} POST_BUILD
-      COMMAND ${CMAKE_RANLIB} ${outlibfile})
   endif()
 endfunction(merge_static_libs)

@@ -194,7 +213,7 @@ function(cc_test TARGET_NAME)
     add_executable(${TARGET_NAME} ${cc_test_SRCS})
     target_link_libraries(${TARGET_NAME} ${cc_test_DEPS} gtest gtest_main)
     add_dependencies(${TARGET_NAME} ${cc_test_DEPS} gtest gtest_main)
-    add_test(${TARGET_NAME} ${TARGET_NAME})
+    add_test(NAME ${TARGET_NAME} COMMAND ${TARGET_NAME} WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR})
   endif()
 endfunction(cc_test)

@@ -281,10 +300,11 @@ function(go_library TARGET_NAME)
   file(GLOB GO_SOURCE RELATIVE "${CMAKE_CURRENT_SOURCE_DIR}" "*.go")
   string(REPLACE "${PADDLE_GO_PATH}/" "" CMAKE_CURRENT_SOURCE_REL_DIR ${CMAKE_CURRENT_SOURCE_DIR})
+  # FIXME: link path
   add_custom_command(TARGET ${TARGET_NAME} POST_BUILD
     COMMAND rm "${${TARGET_NAME}_LIB_PATH}"
     # Golang build source code
-    COMMAND GOPATH=${GOPATH} ${CMAKE_Go_COMPILER} build ${BUILD_MODE}
+    COMMAND env GOPATH=${GOPATH} ${CMAKE_Go_COMPILER} build ${BUILD_MODE}
     -o "${${TARGET_NAME}_LIB_PATH}"
     "./${CMAKE_CURRENT_SOURCE_REL_DIR}/${GO_SOURCE}"
     # must run under GOPATH

@@ -299,11 +319,13 @@ function(go_binary TARGET_NAME)
   cmake_parse_arguments(go_binary "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
   string(REPLACE "${PADDLE_GO_PATH}/" "" CMAKE_CURRENT_SOURCE_REL_DIR ${CMAKE_CURRENT_SOURCE_DIR})
+  # FIXME: link path
   add_custom_command(OUTPUT ${TARGET_NAME}_timestamp
-    COMMAND env GOPATH=${GOPATH} ${CMAKE_Go_COMPILER} build
+    COMMAND env LIBRARY_PATH=${CMAKE_BINARY_DIR}/go/pserver/client/c/:$ENV{LIBRARY_PATH}
+    GOPATH=${GOPATH} ${CMAKE_Go_COMPILER} build
     -o "${CMAKE_CURRENT_BINARY_DIR}/${TARGET_NAME}"
     "./${CMAKE_CURRENT_SOURCE_REL_DIR}/${go_binary_SRCS}"
     WORKING_DIRECTORY "${PADDLE_IN_GOPATH}/go")
   # TODO: don't know what ${TARGET_NAME}_link does
   add_custom_target(${TARGET_NAME} ALL DEPENDS go_vendor ${TARGET_NAME}_timestamp ${go_binary_DEPS})
   install(PROGRAMS ${CMAKE_CURRENT_BINARY_DIR}/${TARGET_NAME} DESTINATION bin)

@@ -332,3 +354,12 @@ function(proto_library TARGET_NAME)
   protobuf_generate_cpp(proto_srcs proto_hdrs ${proto_library_SRCS})
   cc_library(${TARGET_NAME} SRCS ${proto_srcs} DEPS ${proto_library_DEPS} protobuf)
 endfunction()
+
+function(py_proto_compile TARGET_NAME)
+  set(oneValueArgs "")
+  set(multiValueArgs SRCS)
+  cmake_parse_arguments(py_proto_compile "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
+  set(py_srcs)
+  protobuf_generate_python(py_srcs ${py_proto_compile_SRCS})
+  add_custom_target(${TARGET_NAME} ALL DEPENDS ${py_srcs})
+endfunction()
doc_theme/templates/layout.html
@@ -101,7 +101,7 @@
       </div>
       <div class="site-nav-links">
         <div class="site-menu">
-          <a class="fork-on-github" href="https://github.com/PaddlePaddle/Paddle" target="_blank"><i class="fa fa-github"></i> Folk me on Github</a>
+          <a class="fork-on-github" href="https://github.com/PaddlePaddle/Paddle" target="_blank"><i class="fa fa-github"></i> Fork me on Github</a>
           <div class="language-switcher dropdown">
             <a type="button" data-toggle="dropdown">
               <span>English</span>
go/cmd/master/CMakeLists.txt
@@ -12,4 +12,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-go_binary(master SRC master.go)
+go_binary(master SRC master.go DEPS paddle_go_optimizer)
go/cmd/pserver/CMakeLists.txt
@@ -12,4 +12,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-go_binary(pserver SRCS pserver.go)
+go_binary(pserver SRCS pserver.go DEPS paddle_go_optimizer)
go/master/c/client.go
@@ -104,11 +104,22 @@ func paddle_set_dataset(client C.paddle_master_client, path **C.char, size C.int
 	return C.PADDLE_MASTER_OK
 }
 
+// return value:
+//     0:ok
+//    -1:error
 //export paddle_next_record
 func paddle_next_record(client C.paddle_master_client, record **C.uchar) C.int {
 	c := get(client)
-	r := c.NextRecord()
+	r, err := c.NextRecord()
+	if err != nil {
+		// Error
+		// TODO: return the type of error?
+		*record = (*C.uchar)(nullPtr)
+		return -1
+	}
+
 	if len(r) == 0 {
+		// Empty record
 		*record = (*C.uchar)(nullPtr)
 		return 0
 	}
go/master/client.go
@@ -11,7 +11,12 @@ import (
 // Client is the client of the master server.
 type Client struct {
 	conn *connection.Conn
-	ch   chan []byte
+	ch   chan record
+}
+
+type record struct {
+	r   []byte
+	err error
 }
 
 // NewClient creates a new Client.

@@ -21,7 +26,7 @@ type Client struct {
 func NewClient(addrCh <-chan string, bufSize int) *Client {
 	c := &Client{}
 	c.conn = connection.New()
-	c.ch = make(chan []byte, bufSize)
+	c.ch = make(chan record, bufSize)
 	go c.monitorMaster(addrCh)
 	go c.getRecords()
 	return c

@@ -46,10 +51,11 @@ func (c *Client) getRecords() {
 		s := recordio.NewRangeScanner(f, &chunk.Index, -1, -1)
 		for s.Scan() {
-			c.ch <- s.Record()
+			c.ch <- record{s.Record(), nil}
 		}
 
 		if s.Err() != nil {
+			c.ch <- record{nil, s.Err()}
 			log.Errorln(err, chunk.Path)
 		}

@@ -116,6 +122,7 @@ func (c *Client) taskFinished(taskID int) error {
 //
 // NextRecord will block until the next record is available. It is
 // thread-safe.
-func (c *Client) NextRecord() []byte {
-	return <-c.ch
+func (c *Client) NextRecord() ([]byte, error) {
+	r := <-c.ch
+	return r.r, r.err
 }
go/master/client_test.go
@@ -68,12 +68,17 @@ func TestNextRecord(t *testing.T) {
 	for pass := 0; pass < 50; pass++ {
 		received := make(map[byte]bool)
 		for i := 0; i < total; i++ {
-			r := c.NextRecord()
+			r, err := c.NextRecord()
+			if err != nil {
+				t.Fatal(pass, i, "Read error:", err)
+			}
+
 			if len(r) != 1 {
-				t.Fatal("Length should be 1.", r)
+				t.Fatal(pass, i, "Length should be 1.", r)
 			}
 
 			if received[r[0]] {
-				t.Fatal("Received duplicate.", received, r)
+				t.Fatal(pass, i, "Received duplicate.", received, r)
 			}
 			received[r[0]] = true
 		}
go/pserver/client/c/CMakeLists.txt
 cc_library(paddle_go_optimizer DEPS paddle_optimizer paddle_proto glog gflags protobuf)
+target_link_libraries(paddle_go_optimizer stdc++ m)
 go_library(paddle_pserver_cclient STATIC DEPS paddle_go_optimizer)
 if(WITH_TESTING)
-  add_subdirectory(test)
+  # FIXME: this test requires pserver which is not managed by the test
+  # we need some kind of e2e testing machanism.
+  # add_subdirectory(test)
 endif()
go/pserver/client/c/test/CMakeLists.txt
-cc_test(test_cclient SRCS test_cclient.c DEPS paddle_pserver_cclient)
+cc_test(test_cclient SRCS test_cclient.c DEPS paddle_pserver_cclient paddle_go_optimizer)
 add_style_check_target(test_cclient test_cclient.c)
go/pserver/optimizer.go
@@ -2,7 +2,7 @@ package pserver
 // #cgo CFLAGS: -I ../../
 // //FIXME: ldflags contain "build" path
-// #cgo LDFLAGS: ../../build/go/pserver/client/c/libpaddle_go_optimizer.a -lstdc++ -lm
+// #cgo LDFLAGS: ${SRCDIR}/../../build/go/pserver/client/c/libpaddle_go_optimizer.a -lstdc++ -lm
 // #include "paddle/optimizer/optimizer.h"
 // #include <stdlib.h>
 // #include <string.h>

@@ -56,8 +56,8 @@ func newOptimizer(paramWithConfigs ParameterWithConfig) *optimizer {
 func (o *optimizer) GetWeights() []byte {
 	var buffer unsafe.Pointer
-	buffer_len := C.paddle_optimizer_get_weights(o.opt, &buffer)
-	return cArrayToSlice(buffer, int(buffer_len)*C.sizeof_float)
+	bufferLen := C.paddle_optimizer_get_weights(o.opt, &buffer)
+	return cArrayToSlice(buffer, int(bufferLen)*C.sizeof_float)
 }
 
 func (o *optimizer) UpdateParameter(g Gradient) error {
go/pserver/service.go
@@ -10,8 +10,10 @@ import (
 type ElementType int
 
 const (
+	// AlreadyInitialized is true if pserver is initialized
 	AlreadyInitialized = "pserver already initialized"
+	// Uninitialized is true if pserver not fully initialized
 	Uninitialized = "pserver not fully initialized"
 )
 
 // Supported element types

@@ -55,7 +57,7 @@ func NewService(idx int) (*Service, error) {
 	s := &Service{
 		idx: idx,
 	}
 	s.optMap = make(map[string]*optimizer)
 	s.initialized = make(chan struct{})
 	return s, nil
 }
paddle/api/CMakeLists.txt
@@ -66,6 +66,7 @@ SWIG_LINK_LIBRARIES(swig_paddle
     paddle_trainer_lib
     paddle_network
     paddle_parameter
+    paddle_optimizer
     paddle_math
     paddle_utils
     paddle_proto
paddle/framework/CMakeLists.txt
@@ -9,6 +9,10 @@ cc_test(enforce_test SRCS enforce_test.cc)
 proto_library(attr_type SRCS attr_type.proto)
 proto_library(op_proto SRCS op_proto.proto DEPS attr_type)
 cc_test(op_proto_test SRCS op_proto_test.cc DEPS op_proto protobuf)
 proto_library(op_desc SRCS op_desc.proto DEPS attr_type)
 cc_test(op_desc_test SRCS op_desc_test.cc DEPS op_desc protobuf)
+cc_test(op_registry_test SRCS op_registry_test.cc DEPS op_proto op_desc)
+py_proto_compile(framework_py_proto SRCS attr_type.proto op_proto.proto op_desc.proto)
+# Generate an empty __init__.py to make framework_py_proto as a valid python module.
+add_custom_target(framework_py_proto_init ALL COMMAND ${CMAKE_COMMAND} -E touch __init__.py)
+add_dependencies(framework_py_proto framework_py_proto_init)
paddle/framework/attr_checker.h  (new file, mode 100644)

#pragma once

#include <boost/variant.hpp>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>
#include "paddle/framework/enforce.h"

namespace paddle {
namespace framework {

typedef boost::variant<boost::blank, int, float, std::string, std::vector<int>,
                       std::vector<float>, std::vector<std::string>>
    Attribute;
typedef std::unordered_map<std::string, Attribute> AttributeMap;

// check whether a value(attribute) fit a certain limit
template <typename T>
class LargerThanChecker {
 public:
  LargerThanChecker(T lower_bound) : lower_bound_(lower_bound) {}
  void operator()(T& value) const {
    PADDLE_ENFORCE(value > lower_bound_, "larger_than check fail");
  }

 private:
  T lower_bound_;
};

// we can provide users more common Checker, like 'LessThanChecker',
// 'BetweenChecker'...

template <typename T>
class DefaultValueSetter {
 public:
  DefaultValueSetter(T default_value) : default_value_(default_value) {}
  void operator()(T& value) const { value = default_value_; }

 private:
  T default_value_;
};

// check whether a certain attribute fit its limits
// an attribute can have more than one limits
template <typename T>
class TypedAttrChecker {
  typedef std::function<void(T&)> ValueChecker;

 public:
  TypedAttrChecker(const std::string& attr_name) : attr_name_(attr_name) {}

  TypedAttrChecker& LargerThan(const T& lower_bound) {
    value_checkers_.push_back(LargerThanChecker<T>(lower_bound));
    return *this;
  }

  // we can add more common limits, like LessThan(), Between()...

  TypedAttrChecker& SetDefault(const T& default_value) {
    PADDLE_ENFORCE(default_value_setter_.empty(),
                   "%s can't have more than one default value!", attr_name_);
    default_value_setter_.push_back(DefaultValueSetter<T>(default_value));
    return *this;
  }

  // allow users provide their own checker
  TypedAttrChecker& AddCustomChecker(const ValueChecker& checker) {
    value_checkers_.push_back(checker);
    return *this;
  }

  void operator()(AttributeMap& attr_map) const {
    if (!attr_map.count(attr_name_)) {
      // user do not set this attr
      PADDLE_ENFORCE(!default_value_setter_.empty(),
                     "Attribute '%s' is required!", attr_name_);
      // default_value_setter_ has no more than one element
      T val;
      (default_value_setter_[0])(val);
      attr_map[attr_name_] = val;
    }
    Attribute& attr = attr_map.at(attr_name_);
    T& attr_value = boost::get<T>(attr);
    for (const auto& checker : value_checkers_) {
      checker(attr_value);
    }
  }

 private:
  std::string attr_name_;
  std::vector<ValueChecker> value_checkers_;
  std::vector<ValueChecker> default_value_setter_;
};

// check whether op's all attributes fit their own limits
class OpAttrChecker {
  typedef std::function<void(AttributeMap&)> AttrChecker;

 public:
  template <typename T>
  TypedAttrChecker<T>& AddAttrChecker(const std::string& attr_name) {
    attr_checkers_.push_back(TypedAttrChecker<T>(attr_name));
    AttrChecker& checker = attr_checkers_.back();
    return *(checker.target<TypedAttrChecker<T>>());
  }

  void Check(AttributeMap& attr_map) const {
    for (const auto& checker : attr_checkers_) {
      checker(attr_map);
    }
  }

 private:
  std::vector<AttrChecker> attr_checkers_;
};

}  // namespace framework
}  // namespace paddle
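The header above is the core of this commit's framework additions: each TypedAttrChecker accumulates constraint functors plus at most one default-value setter through a fluent interface, and OpAttrChecker later runs them all against an op's AttributeMap. A minimal, self-contained sketch of that checker-chaining pattern (not part of the commit; it uses plain standard C++ and exceptions in place of boost::variant and PADDLE_ENFORCE, and all names are illustrative):

// Illustrative sketch only -- not Paddle code.
#include <functional>
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

class FloatAttrChecker {
 public:
  explicit FloatAttrChecker(std::string name) : name_(std::move(name)) {}

  // Each fluent call appends one constraint, mirroring LargerThan().
  FloatAttrChecker& LargerThan(float lo) {
    checks_.push_back([this, lo](float v) {
      if (v <= lo) throw std::runtime_error(name_ + ": larger_than check fail");
    });
    return *this;
  }

  // Mirrors SetDefault(): fill the attribute only if the user omitted it.
  FloatAttrChecker& SetDefault(float d) {
    default_ = d;
    has_default_ = true;
    return *this;
  }

  // Mirrors operator()(AttributeMap&): apply default, then run every check.
  void operator()(std::map<std::string, float>& attrs) const {
    if (!attrs.count(name_)) {
      if (!has_default_)
        throw std::runtime_error("Attribute '" + name_ + "' is required!");
      attrs[name_] = default_;
    }
    for (const auto& check : checks_) check(attrs.at(name_));
  }

 private:
  std::string name_;
  std::vector<std::function<void(float)>> checks_;
  float default_ = 0;
  bool has_default_ = false;
};

int main() {
  FloatAttrChecker scale("scale");
  scale.SetDefault(1.0f).LargerThan(0.0f);  // same chaining style as AddAttr<float>(...)

  std::map<std::string, float> attrs;       // user sets nothing
  scale(attrs);                             // default kicks in, then checks run
  std::cout << "scale = " << attrs["scale"] << "\n";  // prints 1

  attrs["scale"] = -2.0f;
  try {
    scale(attrs);
  } catch (const std::exception& e) {
    std::cout << e.what() << "\n";          // larger_than check fail
  }
}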
paddle/framework/op_registry.h  (new file, mode 100644)

#pragma once

#include "paddle/framework/attr_checker.h"

//#include "paddle/framework/op_base.h"
#include "paddle/framework/op_desc.pb.h"
#include "paddle/framework/op_proto.pb.h"

namespace paddle {
namespace framework {

//==================For test================//
class OpBase {
 public:
  std::vector<std::string> inputs_;
  std::vector<std::string> outputs_;
  AttributeMap attr_map_;

  virtual std::string Run() const = 0;
  virtual ~OpBase() {}
};
//=========================================//

// helper class to set attribute type
struct AttrTypeHelper {
  template <typename T>
  static void SetAttrType(AttrProto* attr);

  static Attribute GetAttrValue(const AttrDesc& attr_desc) {
    switch (attr_desc.type()) {
      case paddle::framework::AttrType::INT: {
        return attr_desc.i();
      }
      case paddle::framework::AttrType::FLOAT: {
        return attr_desc.f();
      }
      case paddle::framework::AttrType::STRING: {
        return attr_desc.s();
      }
      case paddle::framework::AttrType::INTS: {
        std::vector<int> val(attr_desc.ints_size());
        for (int i = 0; i < attr_desc.ints_size(); ++i) {
          val[i] = attr_desc.ints(i);
        }
        return val;
      }
      case paddle::framework::AttrType::FLOATS: {
        std::vector<float> val(attr_desc.floats_size());
        for (int i = 0; i < attr_desc.floats_size(); ++i) {
          val[i] = attr_desc.floats(i);
        }
        return val;
      }
      case paddle::framework::AttrType::STRINGS: {
        std::vector<std::string> val(attr_desc.strings_size());
        for (int i = 0; i < attr_desc.strings_size(); ++i) {
          val[i] = attr_desc.strings(i);
        }
        return val;
      }
    }
    PADDLE_ENFORCE(false, "Unknown OpDesc::AttrDesc::type !");
    return boost::blank();
  }
};

template <>
void AttrTypeHelper::SetAttrType<int>(AttrProto* attr) {
  attr->set_type(paddle::framework::AttrType::INT);
}

template <>
void AttrTypeHelper::SetAttrType<float>(AttrProto* attr) {
  attr->set_type(paddle::framework::AttrType::FLOAT);
}

template <>
void AttrTypeHelper::SetAttrType<std::string>(AttrProto* attr) {
  attr->set_type(paddle::framework::AttrType::STRING);
}

template <>
void AttrTypeHelper::SetAttrType<std::vector<int>>(AttrProto* attr) {
  attr->set_type(paddle::framework::AttrType::INTS);
}

template <>
void AttrTypeHelper::SetAttrType<std::vector<float>>(AttrProto* attr) {
  attr->set_type(paddle::framework::AttrType::FLOATS);
}

template <>
void AttrTypeHelper::SetAttrType<std::vector<std::string>>(AttrProto* attr) {
  attr->set_type(paddle::framework::AttrType::STRINGS);
}

// this class not only make proto but also init attribute checkers.
class OpProtoAndCheckerMaker {
 public:
  OpProtoAndCheckerMaker(OpProto* proto, OpAttrChecker* op_checker)
      : proto_(proto), op_checker_(op_checker) {}

 protected:
  void AddInput(const std::string& name, const std::string& comment) {
    auto input = proto_->mutable_inputs()->Add();
    *(input->mutable_name()) = name;
    *(input->mutable_comment()) = comment;
  }

  void AddOutput(const std::string& name, const std::string& comment) {
    auto output = proto_->mutable_outputs()->Add();
    *(output->mutable_name()) = name;
    *(output->mutable_comment()) = comment;
  }

  template <typename T>
  TypedAttrChecker<T>& AddAttr(const std::string& name,
                               const std::string& comment) {
    auto attr = proto_->mutable_attrs()->Add();
    *(attr->mutable_name()) = name;
    *(attr->mutable_comment()) = comment;
    AttrTypeHelper::SetAttrType<T>(attr);
    return op_checker_->AddAttrChecker<T>(name);
  }

  void AddType(const std::string& op_type) { proto_->set_type(op_type); }

  void AddComment(const std::string& comment) {
    *(proto_->mutable_comment()) = comment;
  }

  OpProto* proto_;
  OpAttrChecker* op_checker_;
};

class OpRegistry {
  typedef std::function<OpBase*()> OpCreator;

 public:
  template <typename OpType, typename ProtoMakerType>
  static void RegisterOp(const std::string& op_type) {
    creators_[op_type] = []() { return new OpType; };
    OpProto& op_proto = protos_[op_type];
    OpAttrChecker& op_checker = op_checkers_[op_type];
    ProtoMakerType(&op_proto, &op_checker);
    PADDLE_ENFORCE(op_proto.IsInitialized() == true,
                   "Fail to initialize %s's OpProto !", op_type);
  }

  static OpBase* CreateOp(const OpDesc& op_desc) {
    std::string op_type = op_desc.type();
    OpBase* op = (creators_.at(op_type))();
    (op->inputs_).resize(op_desc.inputs_size());
    for (int i = 0; i < op_desc.inputs_size(); ++i) {
      (op->inputs_)[i] = op_desc.inputs(i);
    }
    (op->outputs_).resize(op_desc.outputs_size());
    for (int i = 0; i < op_desc.outputs_size(); ++i) {
      (op->outputs_)[i] = op_desc.outputs(i);
    }
    for (int i = 0; i < op_desc.attrs_size(); ++i) {
      const AttrDesc& ith_attr = op_desc.attrs(i);
      std::string name = ith_attr.name();
      (op->attr_map_)[name] = AttrTypeHelper::GetAttrValue(ith_attr);
    }
    const OpAttrChecker& op_checker = op_checkers_.at(op_type);
    op_checker.Check(op->attr_map_);
    return op;
  }

 private:
  static std::unordered_map<std::string, OpCreator> creators_;
  static std::unordered_map<std::string, OpProto> protos_;
  static std::unordered_map<std::string, OpAttrChecker> op_checkers_;
};

std::unordered_map<std::string, std::function<OpBase*()>> OpRegistry::creators_;
std::unordered_map<std::string, OpProto> OpRegistry::protos_;
std::unordered_map<std::string, OpAttrChecker> OpRegistry::op_checkers_;

template <typename OpType, typename ProtoMakerType>
class OpRegisterHelper {
 public:
  OpRegisterHelper(std::string op_type) {
    OpRegistry::RegisterOp<OpType, ProtoMakerType>(op_type);
  }
};

#define REGISTER_OP(__op_class, __op_maker_class, __op_type) \
  class __op_class##Register { \
   private: \
    const static OpRegisterHelper<__op_class, __op_maker_class> reg; \
  }; \
  const OpRegisterHelper<__op_class, __op_maker_class> \
      __op_class##Register::reg(#__op_type);

// Demos

class CosineOp : public OpBase {
 public:
  virtual std::string Run() const {
    std::string msg = "CosineOp runs! scale = " +
                      std::to_string(boost::get<float>(attr_map_.at("scale")));
    return msg;
  }
};

class CosineOpProtoAndCheckerMaker : public OpProtoAndCheckerMaker {
 public:
  CosineOpProtoAndCheckerMaker(OpProto* proto, OpAttrChecker* op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("input", "input of cosine op");
    AddOutput("output", "output of cosine op");
    AddAttr<float>("scale", "scale of cosine op")
        .SetDefault(1.0)
        .LargerThan(0.0);
    AddType("cos");
    AddComment("This is cos op");
  }
};

REGISTER_OP(CosineOp, CosineOpProtoAndCheckerMaker, cos_sim)

class MyTestOp : public OpBase {
 public:
  virtual std::string Run() const {
    std::string msg =
        "MyTestOp runs! test_attr = " +
        std::to_string(boost::get<int>(attr_map_.at("test_attr")));
    return msg;
  }
};

class MyTestOpProtoAndCheckerMaker : public OpProtoAndCheckerMaker {
 public:
  MyTestOpProtoAndCheckerMaker(OpProto* proto, OpAttrChecker* op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("input", "input of cosine op");
    AddOutput("output", "output of cosine op");
    auto my_checker = [](int i) {
      PADDLE_ENFORCE(i % 2 == 0, "'test_attr' must be even!");
    };
    AddAttr<int>("test_attr", "a simple test attribute")
        .AddCustomChecker(my_checker);
    AddType("my_test_op");
    AddComment("This is my_test op");
  }
};

REGISTER_OP(MyTestOp, MyTestOpProtoAndCheckerMaker, my_test_op)

}  // namespace framework
}  // namespace paddle
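REGISTER_OP above relies on C++ static initialization: the macro expands to a class holding a static OpRegisterHelper whose constructor runs before main() and installs the op's creator, proto, and checker into OpRegistry's static maps. A stripped-down sketch of that registration idiom (not part of the commit; names are illustrative, and it uses a function-local static map, which sidesteps the initialization-order concerns a namespace-scope static map can raise):

// Illustrative sketch only -- not Paddle code.
#include <functional>
#include <iostream>
#include <memory>
#include <string>
#include <unordered_map>

struct Op {
  virtual std::string Run() const = 0;
  virtual ~Op() = default;
};

struct Registry {
  // Function-local static: constructed on first use, before any registration.
  static std::unordered_map<std::string, std::function<Op*()>>& Creators() {
    static std::unordered_map<std::string, std::function<Op*()>> m;
    return m;
  }
};

template <typename T>
struct RegisterHelper {
  // Constructor runs during static initialization and records the factory.
  explicit RegisterHelper(const std::string& type) {
    Registry::Creators()[type] = [] { return new T; };
  }
};

#define REGISTER(cls, type) static RegisterHelper<cls> reg_##cls(type);

struct CosOp : Op {
  std::string Run() const override { return "cos runs"; }
};
REGISTER(CosOp, "cos")  // registered before main() starts

int main() {
  std::unique_ptr<Op> op(Registry::Creators().at("cos")());
  std::cout << op->Run() << "\n";  // prints: cos runs
}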
paddle/framework/op_registry_test.cc  (new file, mode 100644)

#include "paddle/framework/op_registry.h"
#include <gtest/gtest.h>

TEST(OpRegistry, CreateOp) {
  paddle::framework::OpDesc op_desc;
  op_desc.set_type("cos_sim");
  op_desc.add_inputs("aa");
  op_desc.add_outputs("bb");

  auto attr = op_desc.mutable_attrs()->Add();
  attr->set_name("scale");
  attr->set_type(paddle::framework::AttrType::FLOAT);
  attr->set_f(3.3);

  paddle::framework::OpBase* op =
      paddle::framework::OpRegistry::CreateOp(op_desc);
  std::string debug_str = op->Run();
  std::string str = "CosineOp runs! scale = " + std::to_string(3.3);
  ASSERT_EQ(str.size(), debug_str.size());
  for (size_t i = 0; i < debug_str.length(); ++i) {
    ASSERT_EQ(debug_str[i], str[i]);
  }
}

TEST(OpRegistry, IllegalAttr) {
  paddle::framework::OpDesc op_desc;
  op_desc.set_type("cos_sim");
  op_desc.add_inputs("aa");
  op_desc.add_outputs("bb");

  auto attr = op_desc.mutable_attrs()->Add();
  attr->set_name("scale");
  attr->set_type(paddle::framework::AttrType::FLOAT);
  attr->set_f(-2.0);

  bool caught = false;
  try {
    paddle::framework::OpBase* op __attribute__((unused)) =
        paddle::framework::OpRegistry::CreateOp(op_desc);
  } catch (paddle::framework::EnforceNotMet err) {
    caught = true;
    std::string msg = "larger_than check fail";
    const char* err_msg = err.what();
    for (size_t i = 0; i < msg.length(); ++i) {
      ASSERT_EQ(err_msg[i], msg[i]);
    }
  }
  ASSERT_TRUE(caught);
}

TEST(OpRegistry, DefaultValue) {
  paddle::framework::OpDesc op_desc;
  op_desc.set_type("cos_sim");
  op_desc.add_inputs("aa");
  op_desc.add_outputs("bb");

  paddle::framework::OpBase* op =
      paddle::framework::OpRegistry::CreateOp(op_desc);
  std::string debug_str = op->Run();
  float default_value = 1.0;
  std::string str = "CosineOp runs! scale = " + std::to_string(default_value);
  ASSERT_EQ(str.size(), debug_str.size());
  for (size_t i = 0; i < debug_str.length(); ++i) {
    ASSERT_EQ(debug_str[i], str[i]);
  }
}

TEST(OpRegistry, CustomChecker) {
  paddle::framework::OpDesc op_desc;
  op_desc.set_type("my_test_op");
  op_desc.add_inputs("ii");
  op_desc.add_outputs("oo");

  // attr 'test_attr' is not set
  bool caught = false;
  try {
    paddle::framework::OpBase* op __attribute__((unused)) =
        paddle::framework::OpRegistry::CreateOp(op_desc);
  } catch (paddle::framework::EnforceNotMet err) {
    caught = true;
    std::string msg = "Attribute 'test_attr' is required!";
    const char* err_msg = err.what();
    for (size_t i = 0; i < msg.length(); ++i) {
      ASSERT_EQ(err_msg[i], msg[i]);
    }
  }
  ASSERT_TRUE(caught);

  // set 'test_attr' set to an illegal value
  auto attr = op_desc.mutable_attrs()->Add();
  attr->set_name("test_attr");
  attr->set_type(paddle::framework::AttrType::INT);
  attr->set_i(3);
  caught = false;
  try {
    paddle::framework::OpBase* op __attribute__((unused)) =
        paddle::framework::OpRegistry::CreateOp(op_desc);
  } catch (paddle::framework::EnforceNotMet err) {
    caught = true;
    std::string msg = "'test_attr' must be even!";
    const char* err_msg = err.what();
    for (size_t i = 0; i < msg.length(); ++i) {
      ASSERT_EQ(err_msg[i], msg[i]);
    }
  }
  ASSERT_TRUE(caught);

  // set 'test_attr' set to a legal value
  op_desc.mutable_attrs()->Clear();
  attr = op_desc.mutable_attrs()->Add();
  attr->set_name("test_attr");
  attr->set_type(paddle::framework::AttrType::INT);
  attr->set_i(4);
  paddle::framework::OpBase* op =
      paddle::framework::OpRegistry::CreateOp(op_desc);
  std::string debug_str = op->Run();
  std::string str = "MyTestOp runs! test_attr = " + std::to_string(4);
  ASSERT_EQ(str.size(), debug_str.size());
  for (size_t i = 0; i < debug_str.length(); ++i) {
    ASSERT_EQ(debug_str[i], str[i]);
  }
}
\ No newline at end of file
paddle/gserver/layers/AverageLayer.h
@@ -25,6 +25,10 @@ namespace paddle {
 * If SequenceLevel = kNonSeq:
 *   Output: output size is the number of input sequences (NOT input instances)
 *   output[i] = average_{for each instance in this sequence}{input[i]}
+* If stride_ > 0:
+*   Output: a shorten sequence. Stride is the step size by which we slide a
+*           window upon the input sequence, and the average pooling
+*           operation is then applied to each interval independently.
 * If SequenceLevel = kSeq:
 *   Check input sequence must has sub-sequence
 *   Output: output size is the number of input sub-sequences
paddle/gserver/layers/CrossChannelNormLayer.cpp
@@ -36,6 +36,16 @@ MatrixPtr CrossChannelNormLayer::createSpatialMatrix(MatrixPtr data,
                        data->getData() + iter * spatialDim,
                        1,
                        spatialDim,
                        false,
                        useGpu_);
 }
 
+bool CrossChannelNormLayer::init(const LayerMap& layerMap,
+                                 const ParameterMap& parameterMap) {
+  Layer::init(layerMap, parameterMap);
+  CHECK(parameters_[0]);
+  const NormConfig& conf = config_.inputs(0).norm_conf();
+  channels_ = conf.channels();
+  scale_.reset(new Weight(channels_, 1, parameters_[0]));
+  return true;
+}
+
 void CrossChannelNormLayer::forward(PassType passType) {
   Layer::forward(passType);
   MatrixPtr inV = getInputValue(0);

@@ -51,9 +61,7 @@ void CrossChannelNormLayer::forward(PassType passType) {
   Matrix::resizeOrCreate(dataBuffer_, batchSize, dataDim, false, useGpu_);
   Matrix::resizeOrCreate(spatialBuffer_, 1, spatialDim, false, useGpu_);
   Matrix::resizeOrCreate(normBuffer_, batchSize, spatialDim, false, useGpu_);
-  normBuffer_->zeroMem();
-  // add eps to avoid overflow
-  normBuffer_->addScalar(*normBuffer_, 1e-6);
 
   inV->square2(*dataBuffer_);
   for (size_t i = 0; i < batchSize; i++) {
     const MatrixPtr inVTmp = createSampleMatrix(inV, i, spatialDim);

@@ -63,6 +71,8 @@ void CrossChannelNormLayer::forward(PassType passType) {
     // compute norm.
     spatialBuffer_->sumCols(*dataTmp, 1, 0);
+    // add eps to avoid overflow
+    spatialBuffer_->add(1e-6);
     spatialBuffer_->sqrt2(*spatialBuffer_);
     normTmp->copyFrom(*spatialBuffer_);
     outVTmp->copyFrom(*inVTmp);

@@ -82,6 +92,9 @@ void CrossChannelNormLayer::backward(const UpdateCallback& callback) {
   size_t dataDim = inG->getWidth();
   size_t spatialDim = dataDim / channels_;
 
+  MatrixPtr inGBuffer;
+  Matrix::resizeOrCreate(inGBuffer, channels_, spatialDim, false, useGpu_);
+
   dataBuffer_->dotMul(*outG, *outV);
   Matrix::resizeOrCreate(scaleDiff_, channels_, 1, false, useGpu_);
   Matrix::resizeOrCreate(channelBuffer_, channels_, 1, false, useGpu_);

@@ -100,22 +113,24 @@ void CrossChannelNormLayer::backward(const UpdateCallback& callback) {
     scaleDiff_->add(*channelBuffer_, 1.);
 
     sampleBuffer_->dotMul(*inVTmp, *outGTmp);
-    spatialBuffer_->sumCols(*sampleBuffer_, 1., 1.);
+    spatialBuffer_->sumCols(*sampleBuffer_, 1., 0.);
     // scale the grad
-    inGTmp->copyFrom(*inVTmp);
-    inGTmp->mulRowVector(*spatialBuffer_);
+    inGBuffer->copyFrom(*inVTmp);
+    inGBuffer->mulRowVector(*spatialBuffer_);
     // divide by square of norm
     spatialBuffer_->dotMul(*normTmp, *normTmp);
-    inGTmp->divRowVector(*spatialBuffer_);
+    inGBuffer->divRowVector(*spatialBuffer_);
     // subtract
-    inGTmp->add(*outGTmp, -1, 1);
+    inGBuffer->add(*outGTmp, -1, 1);
     // divide by norm
-    inGTmp->divRowVector(*normTmp);
+    inGBuffer->divRowVector(*normTmp);
     // scale the diff
-    inGTmp->mulColVector(*scale_->getW());
+    inGBuffer->mulColVector(*scale_->getW());
+
+    inGTmp->add(*inGBuffer);
   }
   // updata scale
-  if (scale_->getWGrad()) scale_->getWGrad()->copyFrom(*scaleDiff_);
+  if (scale_->getWGrad()) scale_->getWGrad()->add(*scaleDiff_);
   scale_->getParameterPtr()->incUpdate(callback);
 }
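The forward-pass change above moves the epsilon so it is added to the per-sample sum of squares immediately before the square root, keeping the norm strictly positive even for an all-zero row. A tiny standalone illustration of why that placement matters (not part of the commit; values are illustrative):

// Illustrative sketch only -- not Paddle code.
#include <cmath>
#include <cstdio>

int main() {
  const float x[3] = {0.f, 0.f, 0.f};  // a degenerate all-zero channel vector
  float sumsq = 0.f;
  for (float v : x) sumsq += v * v;

  // eps inside the sqrt, as in the patched forward(): norm stays positive
  float norm = std::sqrt(sumsq + 1e-6f);
  std::printf("norm = %g\n", norm);  // ~1e-3, safe to divide by
  for (float v : x) {
    std::printf("normalized: %g\n", v / norm);  // finite, no NaN/Inf
  }
}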
paddle/gserver/layers/MaxLayer.h
@@ -26,6 +26,10 @@ namespace paddle {
 * If SequenceLevel = kNonSeq:
 *   Output: output size is the number of input sequences (NOT input instances)
 *   output[i] = max_{for each instance in this sequence}{input[i]}
+* If stride_ > 0:
+*   Output: a shorten sequence. Stride is the step size by which we slide a
+*           window upon the input sequence, and the max pooling operation is
+*           then applied to each interval independently.
 * If SequenceLevel = kSeq:
 *   Check input sequence must has sub-sequence
 *   Output: output size is the number of input sub-sequences
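The stride_ > 0 comments added to these pooling headers all describe the same operation: slide a window of size stride_ along the sequence and pool each interval independently, producing one output element per window. A short standalone sketch of that strided pooling (not part of the commit; max pooling shown, average is analogous):

// Illustrative sketch only -- not Paddle code.
#include <algorithm>
#include <cstdio>
#include <vector>

// Pool each window of `stride` consecutive elements down to its maximum,
// yielding the "shorten sequence" the layer comments describe.
std::vector<float> poolWithStride(const std::vector<float>& seq, size_t stride) {
  std::vector<float> out;
  for (size_t start = 0; start < seq.size(); start += stride) {
    size_t end = std::min(seq.size(), start + stride);
    out.push_back(*std::max_element(seq.begin() + start, seq.begin() + end));
  }
  return out;  // one pooled value per window
}

int main() {
  std::vector<float> seq = {1, 5, 2, 8, 3, 4, 7};
  for (float v : poolWithStride(seq, 3)) std::printf("%g ", v);  // 5 8 7
  std::printf("\n");
}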
paddle/gserver/layers/NormLayer.cpp
@@ -56,14 +56,4 @@ bool ResponseNormLayer::init(const LayerMap& layerMap,
   return true;
 }
 
-bool CrossChannelNormLayer::init(const LayerMap& layerMap,
-                                 const ParameterMap& parameterMap) {
-  Layer::init(layerMap, parameterMap);
-  CHECK(parameters_[0]);
-  const NormConfig& conf = config_.inputs(0).norm_conf();
-  channels_ = conf.channels();
-  scale_.reset(new Weight(channels_, 1, parameters_[0]));
-  return true;
-}
-
 }  // namespace paddle
paddle/gserver/layers/SequenceLastInstanceLayer.cpp
@@ -26,10 +26,9 @@ namespace paddle {
 * If SequenceLevel = kNonseq:
 *   Output: a sequence containing only the last instance of the input sequence
 * If stride_ > 0:
-*   Output: a shorten sequence. The operation of getting last instance of a
-*           sequence is independently performed on every slice of the input
-*           sequence, which is obtained by sliding a window with the window
-*           size set to stride_.
+*   Output: a shorten sequence. Stride is the step size by which we slide a
+*           window upon the input sequence, and getting last instance
+*           operation is then applied to each interval independently.
 * If SequenceLevel = kSeq:
 *   Check input sequence must has sub-sequence
 *   Output: a sequence containing only the last instance of each sub-sequence

@@ -73,8 +72,7 @@ bool SequenceLastInstanceLayer::init(const LayerMap& layerMap,
 void SequenceLastInstanceLayer::forward(PassType passType) {
   SequencePoolLayer::forward(passType);
 
-  auto starts = (stride_ > 0) ? stridePositions_->getData()
-                              : startPositions_->getData(false);
+  auto starts = startPositions_->getData(false);
   MatrixPtr inputValue = getInputValue(0);
   MatrixPtr outputValue = getOutputValue();
paddle/gserver/layers/SequencePoolLayer.cpp
@@ -72,9 +72,8 @@ void SequencePoolLayer::forward(PassType passType) {
   if (stride_ > 0) {
     CHECK_EQ(input.hasSubseq(), 0UL)
         << "sequence stride pooling is invalid for hasSubseq now";
-    output_.poolSequenceWithStride(
-        input, stride_, &stridePositions_, reversed_);
-    newBatchSize_ = stridePositions_->getSize() - 1;
+    output_.poolSequenceWithStride(
+        input, stride_, &startPositions_, reversed_);
+    newBatchSize_ = startPositions_->getSize() - 1;
   }
   resetOutput(newBatchSize_, dim);
paddle/gserver/layers/SequencePoolLayer.h
@@ -28,8 +28,9 @@ namespace paddle {
  *     sequence}{input[i]}
  * If stride_ > 0:
  *   Check input sequence must not have sub-sequence
- *   Output: a shorten sequence, pooling is performed upon a small local
- *           area
+ *   Output: a shorten sequence. Stride is the step size by which we slide
+ *           a window upon the input sequence, and the pooling operation
+ *           is then applied to each interval independently.
  * If SequenceLevel = kSeq:
  *   Check input sequence must has sub-sequence
  *   Output: output size is the number of input sub-sequences
@@ -47,8 +48,6 @@ protected:
   size_t newBatchSize_;
   ICpuGpuVectorPtr startPositions_;
   int stride_;
-  // Store the start position of each window.
-  IVectorPtr stridePositions_;
   // Whether the input sequence is reversed or not.
   bool reversed_ = false;
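The stride semantics spelled out in the two comment blocks above (and in SequenceLastInstanceLayer.cpp earlier in this commit) are easy to check with a standalone program. The sketch below is illustrative only, not Paddle code: it assumes a plain std::vector sequence and shows how each stride-sized window is reduced independently, either by pooling (max here) or by taking the window's last instance.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Minimal sketch (not Paddle code): pool one sequence with a stride window.
// Each window of `stride` elements is reduced independently, so a sequence
// of length n shrinks to ceil(n / stride) outputs.
std::vector<float> poolWithStride(const std::vector<float>& seq,
                                  std::size_t stride,
                                  bool lastInstance) {
  std::vector<float> out;
  for (std::size_t begin = 0; begin < seq.size(); begin += stride) {
    std::size_t end = std::min(begin + stride, seq.size());
    if (lastInstance) {
      out.push_back(seq[end - 1]);  // SequenceLastInstanceLayer behavior
    } else {
      out.push_back(*std::max_element(seq.begin() + begin,
                                      seq.begin() + end));  // max pooling
    }
  }
  return out;
}

int main() {
  std::vector<float> seq = {1, 5, 2, 9, 3, 7, 4};
  for (float v : poolWithStride(seq, 3, false)) std::cout << v << ' ';
  std::cout << '\n';  // prints: 5 9 4 -- max of {1,5,2}, {9,3,7}, {4}
}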
paddle/gserver/tests/LayerGradUtil.cpp
@@ -241,7 +241,7 @@ void testBatchState(LayerPtr testLayer,
   std::vector<Argument> args;
   args.push_back(out);
-  EXPECT_EQ(0, Argument::sum(args)) << "testBatchState failed";
+  ASSERT_NEAR(0, Argument::sum(args), 1e-5) << "testBatchState failed";
   for (size_t seqId = 0; seqId < numSequences; ++seqId) {
     start[seqId] += seqLens[seqId];
   }
@@ -465,7 +465,6 @@ void initTestLayer(TestConfig testConf,
                         ParameterConfig paraConfig) {
     paraConfig.set_name(paraName);
     paraConfig.set_size(paraSize);
-    paraConfig.set_initial_std(1);
     paraConfig.set_is_static(isStatic);
     auto para =
         std::make_shared<Parameter>(paraConfig, FLAGS_use_gpu, initialize);
@@ -499,6 +498,9 @@ void initTestLayer(TestConfig testConf,
       paraConfig.add_dims((*layerMap)[input.input_layer_name()]->getSize());
       paraConfig.add_dims(testConf.layerConfig.size());
     }
+    CHECK_GE(testConf.paramInitialStd, 0);
+    paraConfig.set_initial_mean(testConf.paramInitialMean);
+    paraConfig.set_initial_std(testConf.paramInitialStd);
     initParameter(paraName, paraSize, inputDef.isStatic, false, paraConfig);
   }
 }
paddle/gserver/tests/LayerGradUtil.h
@@ -125,12 +125,16 @@ struct TestConfig {
   LayerConfig layerConfig;
   std::vector<InputDef> inputDefs;
   size_t biasSize;
+  real paramInitialMean;
+  real paramInitialStd;
   bool testAccumulate;
   bool testState;
   bool staticBias;
   bool testBatchState;
   TestConfig()
       : biasSize(0),
+        paramInitialMean(0.0),
+        paramInitialStd(1.0),
         testAccumulate(true),
         testState(false),
         staticBias(false),
paddle/gserver/tests/test_LayerGrad.cpp
@@ -845,8 +845,12 @@ void testDegradeLayer(bool hasSubseq,
 TEST(Layer, MaxLayer) {
   testDegradeLayer(false, "max", "non-seq", -1);  // seq max to non-seq
+  testDegradeLayer(false,
+                   "max",
+                   "non-seq",
+                   5);  // seq max to a shorten seq, stride window = 5
   testDegradeLayer(true, "max", "non-seq", -1);  // hasSubseq max to non-seq
   testDegradeLayer(true, "max", "seq", -1);      // hasSubseq max to seq
 }
 
 TEST(Layer, SequenceLastInstanceLayer) {
@@ -868,6 +872,10 @@ TEST(Layer, SequenceLastInstanceLayer) {
 TEST(Layer, AverageLayer) {
   testDegradeLayer(false, "average", "non-seq", -1);  // seq average to non-seq
+  testDegradeLayer(false,
+                   "average",
+                   "non-seq",
+                   5);  // seq average to a shorten seq, stride window = 5
   testDegradeLayer(
       true, "average", "non-seq", -1);  // hasSubseq average to non-seq
   testDegradeLayer(true, "average", "seq", -1);  // hasSubseq average to seq
@@ -1661,6 +1669,8 @@ TEST(Layer, PadLayer) {
 TEST(Layer, CrossChannelNormLayer) {
   TestConfig config;
+  config.paramInitialMean = 1.;
+  config.paramInitialStd = 0.;
   config.layerConfig.set_type("norm");
   config.layerConfig.set_size(100);
   LayerInputConfig* input = config.layerConfig.add_inputs();
@@ -1674,7 +1684,7 @@ TEST(Layer, CrossChannelNormLayer) {
   config.inputDefs.push_back({INPUT_DATA, "layer_0", 100, 10});
   for (auto useGpu : {false, true}) {
-    testLayerGrad(config, "cross-channel-norm", 10, false, useGpu, false, 5);
+    testLayerGrad(config, "cross-channel-norm", 10, false, useGpu, false);
   }
 }
paddle/optimizer/adadelta_optimizer.cc
@@ -27,22 +27,24 @@ void AdadeltaOptimizer::Update(const Tensor* gradient) {
 const char *AdadeltaOptimizer::SerializeState(int *state_len) {
   AdadeltaOptimizerState state;
-  // TODO(zhihong) : add lr_policy serialization
   state.set_num_sample_passed(num_sample_passed_);
+  std::string lr_str = this->lr_policy_->SerializeState(state_len);
+  state.mutable_lr_state()->ParseFromString(lr_str);
 
   TensorToProto(*parameter_, state.mutable_parameter());
   TensorToProto(*accum_gradient_, state.mutable_accum_gradient());
   TensorToProto(*accum_delta_, state.mutable_accum_delta());
   TensorToProto(*update_delta_, state.mutable_update_delta());
   auto str = state.SerializeAsString();
-  *state_len = str.size();
+  *state_len += str.size();
   return str.c_str();
 }
 
 void AdadeltaOptimizer::DeserializeState(const std::string &str) {
   AdadeltaOptimizerState state;
   state.ParseFromString(str);
-  // TODO(zhihong) : add lr_policy DeserializeState
+  auto lr_state = state.lr_state();
+  this->lr_policy_->DeserializeState(lr_state.SerializeAsString());
   num_sample_passed_ = state.num_sample_passed();
   ProtoToTensor(state.parameter(), parameter_);
paddle/optimizer/adagrad_optimizer.cc
@@ -19,20 +19,23 @@ void AdagradOptimizer::Update(const Tensor* gradient) {
 }
 const char *AdagradOptimizer::SerializeState(int *state_len) {
   AdagradOptimizerState state;
-  // TODO(zhihong) : add lr_policy serialization
   state.set_num_sample_passed(num_sample_passed_);
+  std::string lr_str = this->lr_policy_->SerializeState(state_len);
+  state.mutable_lr_state()->ParseFromString(lr_str);
 
   TensorToProto(*parameter_, state.mutable_parameter());
   TensorToProto(*accum_gradient_, state.mutable_accum_gradient());
   auto str = state.SerializeAsString();
-  *state_len = str.size();
+  *state_len += str.size();
   return str.c_str();
 }
 
 void AdagradOptimizer::DeserializeState(const std::string &str) {
   AdagradOptimizerState state;
   state.ParseFromString(str);
-  // TODO(zhihong) : add lr_policy DeserializeState
+  auto lr_state = state.lr_state();
+  this->lr_policy_->DeserializeState(lr_state.SerializeAsString());
   num_sample_passed_ = state.num_sample_passed();
   ProtoToTensor(state.parameter(), parameter_);
   ProtoToTensor(state.accum_gradient(), accum_gradient_);
paddle/optimizer/adam_optimizer.cc
@@ -24,20 +24,23 @@ void AdamOptimizer::Update(const Tensor *gradient) {
 const char *AdamOptimizer::SerializeState(int *state_len) {
   AdamOptimizerState state;
-  // TODO(zhihong) : add lr_policy serialization
+  std::string lr_str = this->lr_policy_->SerializeState(state_len);
+  state.mutable_lr_state()->ParseFromString(lr_str);
   state.set_num_sample_passed(num_sample_passed_);
+
   TensorToProto(*parameter_, state.mutable_parameter());
   TensorToProto(*momentums_, state.mutable_momentums());
   TensorToProto(*velocitys_, state.mutable_velocitys());
   auto str = state.SerializeAsString();
-  *state_len = str.size();
+  *state_len += str.size();
   return str.c_str();
 }
 
 void AdamOptimizer::DeserializeState(const std::string &str) {
   AdamOptimizerState state;
   state.ParseFromString(str);
-  // TODO(zhihong) : add lr_policy DeserializeState
+  auto lr_state = state.lr_state();
+  this->lr_policy_->DeserializeState(lr_state.SerializeAsString());
   num_sample_passed_ = state.num_sample_passed();
   ProtoToTensor(state.parameter(), parameter_);
paddle/optimizer/lr_policy.h
@@ -17,36 +17,56 @@ public:
 // constant learning rate policy
 class ConstLr final : public LrPolicy {
 public:
-  ConstLr(double lr) : learning_rate(lr){};
+  ConstLr(double lr) : learning_rate_(lr){};
   double LearningRate(const uint64_t num_sample_passed) {
-    return learning_rate;
+    return learning_rate_;
+  }
+  const char *SerializeState(int *state_len) {
+    LrPolicyState state;
+    state.set_learning_rate(learning_rate_);
+    auto str = state.SerializeAsString();
+    *state_len = str.size();
+    return str.c_str();
+  }
+  void DeserializeState(const std::string &str) {
+    LrPolicyState state;
+    state.ParseFromString(str);
+    learning_rate_ = state.learning_rate();
   }
-  const char *SerializeState(int *state_len) { return nullptr; }
-  void DeserializeState(const std::string &state) {}
 
 private:
-  double learning_rate;
+  double learning_rate_;
 };
 
 class LinearLr final : public LrPolicy {
 public:
   LinearLr(double lr, double lr_decay_a, double lr_decay_b)
-      : learning_rate(lr), lr_decay_a(lr_decay_a), lr_decay_b(lr_decay_b) {}
+      : learning_rate_(lr), lr_decay_a_(lr_decay_a), lr_decay_b_(lr_decay_b) {}
   double LearningRate(const uint64_t num_sample_passed) {
-    return std::max(learning_rate - lr_decay_a * num_sample_passed, lr_decay_b);
+    return std::max(learning_rate_ - lr_decay_a_ * num_sample_passed,
+                    lr_decay_b_);
   }
   const char *SerializeState(int *state_len) {
-    // TODO(zhihong) : add lr_policy serialization
-    return nullptr;
+    LrPolicyState state;
+    state.set_learning_rate(learning_rate_);
+    state.set_lr_decay_a(lr_decay_a_);
+    state.set_lr_decay_b(lr_decay_b_);
+    auto str = state.SerializeAsString();
+    *state_len = str.size();
+    return str.c_str();
   }
-  void DeserializeState(const std::string &state) {
-    // TODO(zhihong) : add lr_policy serialization
+  void DeserializeState(const std::string &str) {
+    LrPolicyState state;
+    state.ParseFromString(str);
+    learning_rate_ = state.learning_rate();
+    lr_decay_a_ = state.lr_decay_a();
+    lr_decay_b_ = state.lr_decay_b();
   }
 
 private:
-  double learning_rate;
-  double lr_decay_a;
-  double lr_decay_b;
+  double learning_rate_;
+  double lr_decay_a_;
+  double lr_decay_b_;
 };
 }  // namespace optimizer
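To see the new serialization end to end: a policy now packs its scalars into the LrPolicyState message (defined in proto/OptimizerConfig.proto further down in this commit) and restores them on deserialization. A minimal round-trip sketch follows; it assumes the generated OptimizerConfig.pb.h header is on the include path and that the message lives in the paddle namespace — both are build details, not shown in the diff.

// Round-trip sketch for the lr-policy state (assumptions noted above).
#include <iostream>
#include <string>
#include "OptimizerConfig.pb.h"

int main() {
  // Pack the scalars, as LinearLr::SerializeState now does.
  paddle::LrPolicyState state;  // namespace assumed
  state.set_learning_rate(0.01);
  state.set_lr_decay_a(1e-6);
  state.set_lr_decay_b(1e-4);
  std::string bytes = state.SerializeAsString();

  // Restore them, as LinearLr::DeserializeState now does.
  paddle::LrPolicyState restored;
  restored.ParseFromString(bytes);
  std::cout << restored.learning_rate() << ' '  // 0.01
            << restored.lr_decay_a() << ' '     // 1e-06
            << restored.lr_decay_b() << '\n';   // 0.0001
}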
paddle/optimizer/sgd_optimizer.cc
@@ -30,16 +30,20 @@ void SGDOptimizer::Update(const Tensor *gradient) {
 const char *SGDOptimizer::SerializeState(int *state_len) {
   SGDOptimizerState state;
   state.set_num_sample_passed(num_sample_passed_);
+  std::string lr_str = this->lr_policy_->SerializeState(state_len);
+  state.mutable_lr_state()->ParseFromString(lr_str);
   TensorToProto(*parameter_, state.mutable_parameter());
   if (momentum_ != 0.0) TensorToProto(*momentums_, state.mutable_momentums());
   auto str = state.SerializeAsString();
-  *state_len = str.size();
+  *state_len += str.size();
   return str.c_str();
 }
 
 void SGDOptimizer::DeserializeState(const std::string &str) {
   SGDOptimizerState state;
   state.ParseFromString(str);
+  auto lr_state = state.lr_state();
+  this->lr_policy_->DeserializeState(lr_state.SerializeAsString());
   num_sample_passed_ = state.num_sample_passed();
   ProtoToTensor(state.parameter(), parameter_);
   if (momentum_ != 0.0) ProtoToTensor(state.parameter(), momentums_);
paddle/parameter/Argument.cpp
@@ -561,7 +561,7 @@ void Argument::degradeSequence(const Argument& input) {
 void Argument::poolSequenceWithStride(const Argument& input,
                                       size_t stride,
-                                      IVectorPtr* stridePostions,
+                                      ICpuGpuVectorPtr* stridePostions,
                                       bool reversed) {
   // If input.sequenceStartPositions = [0, 9, 14, 17, 30] and stride = 5,
   // then sequenceStartPositions = [0, 2, 3, 4, 7].
@@ -598,8 +598,8 @@ void Argument::poolSequenceWithStride(const Argument& input,
   stridePos.emplace_back(starts[numSequences]);
   int size = stridePos.size();
   CHECK_EQ(size - 1, tgtBuf[numSequences]);
-  IVector::resizeOrCreate(*stridePostions, size, false);
-  (*stridePostions)->copyFrom(stridePos.data(), size);
+  ICpuGpuVector::resizeOrCreate(*stridePostions, size, false);
+  (*stridePostions)->getMutableVector(false)->copyFrom(stridePos.data(), size);
 }
 
 void Argument::getValueString(
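The comment above pins down the arithmetic: a sequence [starts[i], starts[i+1]) of length len contributes ceil(len / stride) windows, and the new start positions are the cumulative window counts. A standalone sketch (plain STL, not the Argument API) reproduces the quoted numbers:

#include <cstddef>
#include <iostream>
#include <vector>

// Sketch of the start-position arithmetic from the comment above.
// Each sequence of length `len` yields ceil(len / stride) pooled outputs;
// the result holds cumulative window counts, i.e. the new start positions.
std::vector<int> strideStartPositions(const std::vector<int>& starts,
                                      int stride) {
  std::vector<int> out = {0};
  for (std::size_t i = 0; i + 1 < starts.size(); ++i) {
    int len = starts[i + 1] - starts[i];
    out.push_back(out.back() + (len + stride - 1) / stride);  // ceil division
  }
  return out;
}

int main() {
  // Matches the comment: [0, 9, 14, 17, 30] with stride = 5
  for (int p : strideStartPositions({0, 9, 14, 17, 30}, 5))
    std::cout << p << ' ';
  std::cout << '\n';  // prints: 0 2 3 4 7
}

For reversed pooling the window boundaries are counted from the end of each sequence instead, which is what the strideResultReversed array in test_argument.cpp below encodes.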
paddle/parameter/Argument.h
@@ -299,7 +299,7 @@ struct Argument {
    */
   void poolSequenceWithStride(const Argument& input,
                               size_t stride,
-                              IVectorPtr* stridePositions,
+                              ICpuGpuVectorPtr* stridePositions,
                               bool reversed = false);
 
   /**
    * @brief getValueString will return the argument's output in string. There
paddle/parameter/tests/test_argument.cpp
@@ -31,7 +31,7 @@ TEST(Argument, poolSequenceWithStride) {
   int strideResultReversed[] = {0, 4, 9, 14, 17, 20, 25, 30};
 
   for (auto reversed : {false, true}) {
-    IVectorPtr stridePositions;
+    ICpuGpuVectorPtr stridePositions;
     output.poolSequenceWithStride(
         input, 5 /* stride */, &stridePositions, reversed);
@@ -45,7 +45,7 @@ TEST(Argument, poolSequenceWithStride) {
     CHECK_EQ(stridePositions->getSize(), 8UL);
     auto result = reversed ? strideResultReversed : strideResult;
     for (int i = 0; i < 8; i++) {
-      CHECK_EQ(stridePositions->getData()[i], result[i]);
+      CHECK_EQ(stridePositions->getData(false)[i], result[i]);
     }
   }
 }
paddle/pserver/LightNetwork.cpp
@@ -142,7 +142,7 @@ SocketServer::SocketServer(const std::string &addr, int port, int rdmaCpu)
   }
 
   /// trigger to initialize RDMA lib
-  PCHECK(RdmaClientDaemons::get()) << "initilizate RDMA failed\n";
+  CHECK(RdmaClientDaemons::get()) << "initilizate RDMA failed\n";
 }
 
 SocketServer::~SocketServer() {
@@ -168,7 +168,7 @@ void SocketServer::tcpServer() {
   /// First call to socket() function
   socket_ = socket(AF_INET, SOCK_STREAM, 0);
-  PCHECK(socket_ >= 0) << "ERROR opening socket";
+  CHECK(socket_ >= 0) << "ERROR opening socket";
 
   /// Initialize socket structure
   bzero((char *)&serv_addr, sizeof(serv_addr));
@@ -176,7 +176,7 @@ void SocketServer::tcpServer() {
   serv_addr.sin_port = htons(port_);
   if (!addr_.empty()) {
     server = gethostbyname(addr_.c_str());
-    PCHECK(server) << "ERROR, no such host: " << addr_;
+    CHECK(server) << "ERROR, no such host: " << addr_;
     bcopy((char *)server->h_addr,
           (char *)&serv_addr.sin_addr.s_addr,
           server->h_length);
@@ -187,7 +187,7 @@ void SocketServer::tcpServer() {
   setOption(socket_);
 
   /// Now bind the host address using bind() call.
-  PCHECK(bind(socket_, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) >= 0)
+  CHECK(bind(socket_, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) >= 0)
       << "ERROR on binding " << addr_;
 
   /// Now start listening for the clients, here process will
@@ -201,7 +201,7 @@ void SocketServer::tcpServer() {
     if (stopping_) {
       break;
     }
-    PCHECK(newsockfd >= 0) << "ERROR on accept";
+    CHECK(newsockfd >= 0) << "ERROR on accept";
     constexpr int kPeerNameLen = 128;
     char peerName[kPeerNameLen];
     CHECK(inet_ntop(AF_INET, &cli_addr.sin_addr, peerName, kPeerNameLen));
@@ -227,14 +227,14 @@ void SocketServer::rdmaServer() {
   /// First call to socket() function
   rdmaSocket_ = rdma::ssocket(rdmaCpu_);
-  PCHECK(rdmaSocket_) << "ERROR opening RDMA socket";
+  CHECK(rdmaSocket_) << "ERROR opening RDMA socket";
 
-  PCHECK(rdma::bind(rdmaSocket_, rdmaUri_.c_str()) == 0)
+  CHECK(rdma::bind(rdmaSocket_, rdmaUri_.c_str()) == 0)
       << "ERROR bind RDMA socket";
 
   /// Now start listening for the clients, here process will
   /// go in sleep mode and will wait for the incoming connection
-  PCHECK(rdma::listen(rdmaSocket_) == 0) << "ERROR listen RDMA socket";
+  CHECK(rdma::listen(rdmaSocket_) == 0) << "ERROR listen RDMA socket";
 
   while (true) {
     /// Accept actual connection from the client
@@ -242,7 +242,7 @@ void SocketServer::rdmaServer() {
     if (stopping_) {
      break;
    }
-    PCHECK(newsock) << "ERROR on accept";
+    CHECK(newsock) << "ERROR on accept";
 
     constexpr int kPeerNameLen = 128;
     char peerName[kPeerNameLen];
@@ -290,7 +290,7 @@ RdmaClientDaemons::RdmaClientDaemons() {
   onlineCpus_ = rdma::numCpus();
   for (auto i = 0; i < onlineCpus_; i++) {
     socket = rdma::csocket(i);
-    PCHECK(socket) << "ERROR open client socket daemon";
+    CHECK(socket) << "ERROR open client socket daemon";
 
     rdmaClientSocket_.push_back(socket);
   }
@@ -355,7 +355,7 @@ void SocketClient::TcpClient(const std::string &serverAddr, int serverPort) {
   /// Create a socket point
   int sockfd = socket(AF_INET, SOCK_STREAM, 0);
-  PCHECK(sockfd >= 0) << "ERROR opening socket";
+  CHECK(sockfd >= 0) << "ERROR opening socket";
 
 #if defined(__OSX__) || defined(__APPLE__)
   server = getipnodebyname(serverAddr.c_str(), AF_INET, AI_DEFAULT, &errRet);
@@ -396,8 +396,8 @@ void SocketClient::TcpClient(const std::string &serverAddr, int serverPort) {
       }
       std::this_thread::sleep_for(std::chrono::seconds(1));
     } else {
-      PCHECK(errno != 0) << "ERROR connecting to " << serverAddr << ":"
-                         << serverPort << "errorno: " << errno;
+      CHECK(errno != 0) << "ERROR connecting to " << serverAddr << ":"
+                        << serverPort << "errorno: " << errno;
     }
   } while (errno == ECONNREFUSED);
@@ -426,7 +426,7 @@ void SocketClient::RdmaClient(const std::string &serverAddr, int serverPort) {
   /// connect to server with socket daemon
   sock = rdma::connect(socketDaemon_, rdmaUri.c_str());
-  PCHECK(sock) << "ERROR connect to server" << rdmaUri;
+  CHECK(sock) << "ERROR connect to server" << rdmaUri;
 
   std::vector<std::string> seg;
   str::split(rdmaUri, '/', &seg);
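The change in this file (and in SocketChannel.cpp, SocketTest.cpp, Tester.cpp, and ThreadLocal.h below) is mechanical: every PCHECK becomes a plain glog CHECK, which aborts the process and prints the streamed message when the condition fails. A minimal sketch of the glog idiom for reference — it assumes glog is installed, and the fd value is made up:

#include <glog/logging.h>

int main(int argc, char* argv[]) {
  google::InitGoogleLogging(argv[0]);
  int fd = 3;  // pretend this came from socket()/accept()
  CHECK(fd >= 0) << "ERROR opening socket";  // aborts with the message if false
  CHECK_GE(fd, 0) << "typed variant; logs both operands on failure";
  // PCHECK(fd >= 0) << "...";  // glog's variant that also appends strerror(errno)
  return 0;
}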
paddle/pserver/SocketChannel.cpp
@@ -51,7 +51,7 @@ size_t SocketChannel::read(void* buf, size_t size) {
     else
       len = rdma::read(rdmaSocket_, (char*)buf + total, size - total);
-    PCHECK(len >= 0) << " peer=" << peerName_;
+    CHECK(len >= 0) << " peer=" << peerName_;
     if (len <= 0) {
       return total;
     }
@@ -69,7 +69,7 @@ size_t SocketChannel::write(const void* buf, size_t size) {
     else
       len = rdma::write(rdmaSocket_, (char*)buf + total, size - total);
-    PCHECK(len >= 0) << " peer=" << peerName_;
+    CHECK(len >= 0) << " peer=" << peerName_;
     if (len <= 0) {
       return total;
     }
@@ -98,10 +98,10 @@ static size_t readwritev(IOFunc iofunc,
   while (size < total) {
     ssize_t len =
         iofunc(socket, &iovs[curIov], std::min(iovcnt - curIov, maxiovs));
-    PCHECK(len > 0) << " peer=" << peerName << " curIov=" << curIov
-                    << " iovCnt=" << iovcnt
-                    << " iovs[curIov].base=" << iovs[curIov].iov_base
-                    << " iovs[curIov].iov_len=" << iovs[curIov].iov_len;
+    CHECK(len > 0) << " peer=" << peerName << " curIov=" << curIov
+                   << " iovCnt=" << iovcnt
+                   << " iovs[curIov].base=" << iovs[curIov].iov_base
+                   << " iovs[curIov].iov_len=" << iovs[curIov].iov_len;
     size += len;
 
     /// restore iovs[curIov] to the original value
@@ -183,7 +183,7 @@ void SocketChannel::writeMessage(const std::vector<struct iovec>& userIovs) {
     header.totalLength += iov.iov_len;
   }
 
-  PCHECK(writev(iovs) == (size_t)header.totalLength);
+  CHECK(writev(iovs) == (size_t)header.totalLength);
 }
 
 std::unique_ptr<MsgReader> SocketChannel::readMessage() {
@@ -194,7 +194,7 @@ std::unique_ptr<MsgReader> SocketChannel::readMessage() {
     return nullptr;
   }
 
-  PCHECK(len == sizeof(header));
+  CHECK(len == sizeof(header));
 
   std::unique_ptr<MsgReader> msgReader(new MsgReader(this, header.numIovs));
@@ -209,7 +209,7 @@ std::unique_ptr<MsgReader> SocketChannel::readMessage() {
 MsgReader::MsgReader(SocketChannel* channel, size_t numBlocks)
     : channel_(channel), blockLengths_(numBlocks), currentBlockIndex_(0) {
   size_t size = numBlocks * sizeof(blockLengths_[0]);
-  PCHECK(channel_->read(&blockLengths_[0], size) == size);
+  CHECK(channel_->read(&blockLengths_[0], size) == size);
 }
 
 void MsgReader::readBlocks(const std::vector<void*>& bufs) {
@@ -223,12 +223,12 @@ void MsgReader::readBlocks(const std::vector<void*>& bufs) {
     ++currentBlockIndex_;
   }
 
-  PCHECK(channel_->readv(&iovs) == totalLength);
+  CHECK(channel_->readv(&iovs) == totalLength);
 }
 
 void MsgReader::readNextBlock(void* buf) {
   CHECK_LT(currentBlockIndex_, blockLengths_.size());
-  PCHECK(channel_->read(buf, getNextBlockLength()) == getNextBlockLength());
+  CHECK(channel_->read(buf, getNextBlockLength()) == getNextBlockLength());
   ++currentBlockIndex_;
 }
paddle/pserver/test/SocketTest.cpp
@@ -113,7 +113,7 @@ void SocketServer::run() {
   /* First call to socket() function */
   socket_ = socket(AF_INET, SOCK_STREAM, 0);
-  PCHECK(socket_ >= 0) << "ERROR opening socket";
+  CHECK(socket_ >= 0) << "ERROR opening socket";
 
   /* Initialize socket structure */
   bzero((char*)&serv_addr, sizeof(serv_addr));
@@ -122,7 +122,7 @@ void SocketServer::run() {
   serv_addr.sin_port = htons(port_);
 
   /* Now bind the host address using bind() call.*/
-  PCHECK(bind(socket_, (struct sockaddr*)&serv_addr, sizeof(serv_addr)) >= 0)
+  CHECK(bind(socket_, (struct sockaddr*)&serv_addr, sizeof(serv_addr)) >= 0)
      << "ERROR on binding";
 
   /* Now start listening for the clients, here process will
@@ -134,7 +134,7 @@ void SocketServer::run() {
   while (true) {
     /* Accept actual connection from the client */
     newsockfd = accept(socket_, (struct sockaddr*)&cli_addr, &clilen);
-    PCHECK(newsockfd >= 0) << "ERROR on accept";
+    CHECK(newsockfd >= 0) << "ERROR on accept";
 
     SocketWorker* worker = new SocketWorker(newsockfd);
     worker->start();
@@ -146,17 +146,17 @@ void SocketWorker::run() {
   while (true) {
     int64_t n = channel_.readAll(&header, sizeof(header));
-    PCHECK(n == sizeof(header)) << "ERROR reading from socket";
+    CHECK(n == sizeof(header)) << "ERROR reading from socket";
 
     buffer_.resize(header.dataLength);
     n = channel_.readAll(&buffer_[0], header.dataLength);
-    PCHECK(n == header.dataLength) << "ERROR reading from socket";
+    CHECK(n == header.dataLength) << "ERROR reading from socket";
 
     /* Write a response to the client */
     n = channel_.writeAll(&header, sizeof(header));
-    PCHECK(n == sizeof(header)) << "ERROR reading from socket";
+    CHECK(n == sizeof(header)) << "ERROR reading from socket";
     n = channel_.writeAll(buffer_.data(), buffer_.size());
-    PCHECK(n == header.dataLength) << "ERROR writing to socket";
+    CHECK(n == header.dataLength) << "ERROR writing to socket";
   }
 }
@@ -177,9 +177,9 @@ SocketClient::SocketClient(const std::string& serverAddr, int serverPort) {
   /* Create a socket point */
   int sockfd = socket(AF_INET, SOCK_STREAM, 0);
-  PCHECK(sockfd >= 0) << "ERROR opening socket";
+  CHECK(sockfd >= 0) << "ERROR opening socket";
 
   server = gethostbyname(serverAddr.c_str());
-  PCHECK(server) << "ERROR, no such host: " << serverAddr;
+  CHECK(server) << "ERROR, no such host: " << serverAddr;
 
   bzero((char*)&serv_addr, sizeof(serv_addr));
   serv_addr.sin_family = AF_INET;
@@ -189,7 +189,7 @@ SocketClient::SocketClient(const std::string& serverAddr, int serverPort) {
   serv_addr.sin_port = htons(serverPort);
 
   /* Now connect to the server */
-  PCHECK(connect(sockfd, (sockaddr*)&serv_addr, sizeof(serv_addr)) >= 0)
+  CHECK(connect(sockfd, (sockaddr*)&serv_addr, sizeof(serv_addr)) >= 0)
      << "ERROR connecting";
 
   channel_.reset(new SocketChannel(sockfd));
@@ -234,18 +234,18 @@ int main(int argc, char** argv) {
     cpuGrad.copyFrom(gpuGrad);
 
     header.dataLength = dataSize;
-    PCHECK(channel->writeAll(&header, sizeof(header)) == sizeof(header))
+    CHECK(channel->writeAll(&header, sizeof(header)) == sizeof(header))
        << "Client write header error";
 
-    PCHECK(channel->writeAll(cpuGrad.getData(), dataSize) == dataSize)
+    CHECK(channel->writeAll(cpuGrad.getData(), dataSize) == dataSize)
        << "Client write data error";
 
     /* Now read server response */
-    PCHECK(channel->readAll(&header, sizeof(header)) == sizeof(header))
+    CHECK(channel->readAll(&header, sizeof(header)) == sizeof(header))
       << "Client read header error";
 
     CHECK_EQ((uint64_t)header.dataLength, dataSize);
-    PCHECK(channel->readAll(cpuParam.getData(), dataSize) == dataSize)
+    CHECK(channel->readAll(cpuParam.getData(), dataSize) == dataSize)
       << "Client read data error";
 
     gpuParam.copyFrom(cpuParam);
paddle/scripts/docker/build.sh
@@ -78,7 +78,7 @@ paddle version
 # PaddlePaddle. This awkwardness is due to
 # https://github.com/PaddlePaddle/Paddle/issues/1854. It also
 # describes a solution.
-if [ ${WITH_DOC} == "ON" ]; then
+if [[ ${WITH_DOC} == "ON" ]]; then
   cat <<EOF
 ========================================
 Building documentation ...
paddle/scripts/travis/build_doc.sh
@@ -5,13 +5,14 @@ set -e
 mkdir -p $TRAVIS_BUILD_DIR/build
 cd $TRAVIS_BUILD_DIR/build
 
-# Compile Documentation only.
-cmake .. -DCMAKE_BUILD_TYPE=Debug -DWITH_GPU=OFF -DWITH_DOC=OFF -DWITH_STYLE_CHECK=OFF
+# Compile paddle binaries first
+cmake .. -DCMAKE_BUILD_TYPE=Debug -DWITH_GPU=OFF -DWITH_DOC=OFF -DWITH_GOLANG=ON -DWITH_STYLE_CHECK=OFF
 
 mkdir output
 make -j `nproc`
 find .. -name '*whl' | xargs pip install  # install all wheels.
 rm -rf *
+# Compile Documentation only.
 cmake .. -DCMAKE_BUILD_TYPE=Debug -DWITH_GPU=OFF -DWITH_DOC=ON
 make -j `nproc` paddle_docs paddle_docs_cn
@@ -25,7 +26,7 @@ SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
 SHA=`git rev-parse --verify HEAD`
 # Documentation branch name
 # gh-pages branch is used for PaddlePaddle.org. The English version of
 # documentation in `doc` directory, and the chinese version in `doc_cn`
 # directory.
 TARGET_BRANCH="gh-pages"
@@ -51,7 +52,7 @@ function deploy_docs() {
   # checkout github page branch
   git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
   mkdir -p ${DIR}
   # remove old docs. mv new docs.
   set +e
@@ -62,7 +63,7 @@ function deploy_docs() {
   git add .
 }
 
 deploy_docs "master" "."
 deploy_docs "develop" "./develop/"
 
 # Check is there anything changed.
paddle/trainer/Tester.cpp
@@ -175,7 +175,7 @@ real Tester::forwardOneBatch(const DataBatch& dataBatch,
   }
   hl_stream_synchronize(HPPL_STREAM_DEFAULT);
   FILE* fp = fopen(featFile.c_str(), "ab+");
-  PCHECK(!ferror(fp)) << "Fail to open " << featFile;
+  CHECK(!ferror(fp)) << "Fail to open " << featFile;
 
   size_t sampleNum = featMatrices[0]->getHeight();
   for (size_t i = 0; i < sampleNum; ++i) {
paddle/utils/ThreadLocal.h
@@ -51,7 +51,7 @@ template <class T>
 class ThreadLocal {
 public:
   ThreadLocal() {
-    PCHECK(pthread_key_create(&threadSpecificKey_, dataDestructor) == 0);
+    CHECK(pthread_key_create(&threadSpecificKey_, dataDestructor) == 0);
   }
   ~ThreadLocal() { pthread_key_delete(threadSpecificKey_); }
@@ -65,7 +65,7 @@ public:
     if (!p && createLocal) {
       p = new T();
       int ret = pthread_setspecific(threadSpecificKey_, p);
-      PCHECK(ret == 0);
+      CHECK(ret == 0);
     }
     return p;
   }
@@ -79,7 +79,7 @@ public:
     if (T* q = get(false)) {
       dataDestructor(q);
     }
-    PCHECK(pthread_setspecific(threadSpecificKey_, p) == 0);
+    CHECK(pthread_setspecific(threadSpecificKey_, p) == 0);
   }
 
   /**
@@ -112,7 +112,7 @@ private:
 template <class T>
 class ThreadLocalD {
 public:
-  ThreadLocalD() { PCHECK(pthread_key_create(&threadSpecificKey_, NULL) == 0); }
+  ThreadLocalD() { CHECK(pthread_key_create(&threadSpecificKey_, NULL) == 0); }
   ~ThreadLocalD() {
     pthread_key_delete(threadSpecificKey_);
     for (auto t : threadMap_) {
@@ -127,7 +127,7 @@ public:
     T* p = (T*)pthread_getspecific(threadSpecificKey_);
     if (!p) {
       p = new T();
-      PCHECK(pthread_setspecific(threadSpecificKey_, p) == 0);
+      CHECK(pthread_setspecific(threadSpecificKey_, p) == 0);
       updateMap(p);
     }
     return p;
@@ -141,7 +141,7 @@ public:
     if (T* q = (T*)pthread_getspecific(threadSpecificKey_)) {
       dataDestructor(q);
     }
-    PCHECK(pthread_setspecific(threadSpecificKey_, p) == 0);
+    CHECK(pthread_setspecific(threadSpecificKey_, p) == 0);
     updateMap(p);
   }
proto/OptimizerConfig.proto
@@ -78,11 +78,15 @@ enum DataType {
   repeated bytes content = 2;
 }
 
+message LrPolicyState {
+  // learninRate Policy
+  optional double learning_rate = 1 [default = 1.0];
+  optional double lr_decay_a = 2;
+  optional double lr_decay_b = 3;
+}
+
 message SGDOptimizerState {
-  // learning rate policy
-  optional double learning_rate = 101;
-  optional double lr_decay_a = 102;
-  optional double lr_decay_b = 103;
+  optional LrPolicyState lr_state = 101;
   optional double num_sample_passed = 104;
   // state
   optional TensorProto parameter = 1;
@@ -91,9 +95,7 @@ message SGDOptimizerState {
 message AdadeltaOptimizerState {
   // learning rate policy
-  optional double learning_rate = 101;
-  optional double lr_decay_a = 102;
-  optional double lr_decay_b = 103;
+  optional LrPolicyState lr_state = 101;
   optional double num_sample_passed = 104;
   // state
   optional TensorProto parameter = 1;
@@ -102,11 +104,9 @@ message AdadeltaOptimizerState {
   optional TensorProto update_delta = 4;
 }
 
 message AdagradOptimizerState {
-  // learning rate policy
-  optional double learning_rate = 101;
-  optional double lr_decay_a = 102;
-  optional double lr_decay_b = 103;
+  optional LrPolicyState lr_state = 101;
   optional double num_sample_passed = 104;
   // state
   optional TensorProto parameter = 1;
@@ -114,10 +114,7 @@ message AdagradOptimizerState {
 }
 
 message AdamOptimizerState {
-  // learning rate policy
-  optional double learning_rate = 101;
-  optional double lr_decay_a = 102;
-  optional double lr_decay_b = 103;
+  optional LrPolicyState lr_state = 101;
   optional double num_sample_passed = 104;
   // state
   optional TensorProto parameter = 1;
python/CMakeLists.txt
@@ -29,7 +29,7 @@ configure_file(${CMAKE_CURRENT_SOURCE_DIR}/setup.py.in
 add_custom_command(OUTPUT ${OUTPUT_DIR}/.timestamp
     COMMAND env ${py_env} ${PYTHON_EXECUTABLE} setup.py bdist_wheel
     COMMAND ${CMAKE_COMMAND} -E touch ${OUTPUT_DIR}/.timestamp
-    DEPENDS gen_proto_py ${PY_FILES} ${external_project_dependencies} ${COPY_PADDLE_MASTER})
+    DEPENDS gen_proto_py framework_py_proto ${PY_FILES} ${external_project_dependencies} ${COPY_PADDLE_MASTER})
 
 add_custom_target(paddle_python ALL DEPENDS
     ${OUTPUT_DIR}/.timestamp)
@@ -43,6 +43,7 @@ if (WITH_TESTING)
     add_subdirectory(paddle/v2/tests)
     add_subdirectory(paddle/v2/reader/tests)
     add_subdirectory(paddle/v2/plot/tests)
+    add_subdirectory(paddle/v2/framework/tests)
   endif()
 endif()
 install(DIRECTORY ${PADDLE_PYTHON_PACKAGE_DIR}
python/paddle/trainer/config_parser.py
@@ -1353,7 +1353,8 @@ class LayerBase(object):
                  device=None,
                  active_type="",
                  drop_rate=0.,
-                 coeff=None):
+                 coeff=None,
+                 error_clipping_threshold=None):
         config_assert('@' not in name,
                       "layer name: %s contain special character @" % name)
         global g_current_submodel
@@ -1387,6 +1388,9 @@ class LayerBase(object):
         elif g_default_device is not None:
             self.config.device = g_default_device
 
+        if error_clipping_threshold is not None:
+            self.config.error_clipping_threshold = error_clipping_threshold
+
         for input_index in xrange(len(self.inputs)):
             input = self.inputs[input_index]
             input_config = None
@@ -2466,10 +2470,14 @@ class MaxLayer(LayerBase):
                  trans_type='non-seq',
                  bias=False,
                  output_max_index=None,
+                 stride=-1,
                  **xargs):
         super(MaxLayer, self).__init__(name, 'max', 0, inputs=inputs, **xargs)
         config_assert(len(self.inputs) == 1, 'MaxLayer must have 1 input')
+        if trans_type == 'seq':
+            config_assert(stride == -1, 'subseq does not support stride window')
         self.config.trans_type = trans_type
+        self.config.seq_pool_stride = stride
         for input_index in xrange(len(self.inputs)):
             input_layer = self.get_input_layer(input_index)
             self.set_layer_size(input_layer.size)
@@ -2731,11 +2739,15 @@ class AverageLayer(LayerBase):
                  average_strategy='average',
                  trans_type='non-seq',
                  bias=False,
+                 stride=-1,
                  **xargs):
         super(AverageLayer, self).__init__(
             name, 'average', 0, inputs=inputs, **xargs)
         self.config.average_strategy = average_strategy
+        if trans_type == 'seq':
+            config_assert(stride == -1, 'subseq does not support stride window')
         self.config.trans_type = trans_type
+        self.config.seq_pool_stride = stride
         config_assert(len(inputs) == 1, 'AverageLayer must have 1 input')
         for input_index in xrange(len(self.inputs)):
             input_layer = self.get_input_layer(input_index)
@@ -2774,13 +2786,7 @@ class TensorLayer(LayerBase):
 @config_layer('mixed')
 class MixedLayer(LayerBase):
-    def __init__(self,
-                 name,
-                 inputs,
-                 size=0,
-                 bias=True,
-                 error_clipping_threshold=None,
-                 **xargs):
+    def __init__(self, name, inputs, size=0, bias=True, **xargs):
         config_assert(inputs, 'inputs cannot be empty')
         super(MixedLayer, self).__init__(
             name, 'mixed', size, inputs=inputs, **xargs)
@@ -2862,9 +2868,6 @@ class MixedLayer(LayerBase):
             self.config.bias_size = psize
             self.create_bias_parameter(bias, psize)
 
-        if error_clipping_threshold is not None:
-            self.config.error_clipping_threshold = error_clipping_threshold
-
 
 # like MixedLayer, but no bias parameter
 @config_func
python/paddle/trainer_config_helpers/layers.py
浏览文件 @
3aa67981
...
@@ -1247,10 +1247,19 @@ def pooling_layer(input,
...
@@ -1247,10 +1247,19 @@ def pooling_layer(input,
name
=
None
,
name
=
None
,
bias_attr
=
None
,
bias_attr
=
None
,
agg_level
=
AggregateLevel
.
TO_NO_SEQUENCE
,
agg_level
=
AggregateLevel
.
TO_NO_SEQUENCE
,
stride
=-
1
,
layer_attr
=
None
):
layer_attr
=
None
):
"""
"""
Pooling layer for sequence inputs, not used for Image.
Pooling layer for sequence inputs, not used for Image.
If stride > 0, this layer slides a window whose size is determined by stride,
and return the pooling value of the window as the output. Thus, a long sequence
will be shorten.
The parameter stride specifies the intervals at which to apply the pooling
operation. Note that for sequence with sub-sequence, the default value
of stride is -1.
The example usage is:
The example usage is:
.. code-block:: python
.. code-block:: python
...
@@ -1269,6 +1278,8 @@ def pooling_layer(input,
...
@@ -1269,6 +1278,8 @@ def pooling_layer(input,
:param pooling_type: Type of pooling, MaxPooling(default), AvgPooling,
:param pooling_type: Type of pooling, MaxPooling(default), AvgPooling,
SumPooling, SquareRootNPooling.
SumPooling, SquareRootNPooling.
:type pooling_type: BasePoolingType|None
:type pooling_type: BasePoolingType|None
:param stride: The step size between successive pooling regions.
:type stride: Int
:param bias_attr: Bias parameter attribute. False if no bias.
:param bias_attr: Bias parameter attribute. False if no bias.
:type bias_attr: ParameterAttribute|None|False
:type bias_attr: ParameterAttribute|None|False
:param layer_attr: The Extra Attributes for layer, such as dropout.
:param layer_attr: The Extra Attributes for layer, such as dropout.
...
@@ -1286,12 +1297,16 @@ def pooling_layer(input,
...
@@ -1286,12 +1297,16 @@ def pooling_layer(input,
extra_dict
[
'output_max_index'
]
=
pooling_type
.
output_max_index
extra_dict
[
'output_max_index'
]
=
pooling_type
.
output_max_index
extra_dict
.
update
(
ExtraLayerAttribute
.
to_kwargs
(
layer_attr
))
extra_dict
.
update
(
ExtraLayerAttribute
.
to_kwargs
(
layer_attr
))
if
agg_level
==
AggregateLevel
.
TO_SEQUENCE
:
assert
stride
==
-
1
Layer
(
Layer
(
name
=
name
,
name
=
name
,
type
=
pooling_type
.
name
,
type
=
pooling_type
.
name
,
inputs
=
[
Input
(
input
.
name
)],
inputs
=
[
Input
(
input
.
name
)],
bias
=
ParamAttr
.
to_bias
(
bias_attr
),
bias
=
ParamAttr
.
to_bias
(
bias_attr
),
trans_type
=
agg_level
,
trans_type
=
agg_level
,
stride
=
stride
,
**
extra_dict
)
**
extra_dict
)
return
LayerOutput
(
return
LayerOutput
(
...
@@ -1553,7 +1568,7 @@ def last_seq(input,
...
@@ -1553,7 +1568,7 @@ def last_seq(input,
:type name: basestring
:type name: basestring
:param input: Input layer name.
:param input: Input layer name.
:type input: LayerOutput
:type input: LayerOutput
:param stride:
window size
.
:param stride:
The step size between successive pooling regions
.
:type stride: Int
:type stride: Int
:param layer_attr: extra layer attributes.
:param layer_attr: extra layer attributes.
:type layer_attr: ExtraLayerAttribute.
:type layer_attr: ExtraLayerAttribute.
...
@@ -1609,7 +1624,7 @@ def first_seq(input,
...
@@ -1609,7 +1624,7 @@ def first_seq(input,
:type name: basestring
:type name: basestring
:param input: Input layer name.
:param input: Input layer name.
:type input: LayerOutput
:type input: LayerOutput
:param stride:
window size
.
:param stride:
The step size between successive pooling regions
.
:type stride: Int
:type stride: Int
:param layer_attr: extra layer attributes.
:param layer_attr: extra layer attributes.
:type layer_attr: ExtraLayerAttribute.
:type layer_attr: ExtraLayerAttribute.
...
@@ -4791,6 +4806,14 @@ def maxout_layer(input, groups, num_channels=None, name=None, layer_attr=None):
...
@@ -4791,6 +4806,14 @@ def maxout_layer(input, groups, num_channels=None, name=None, layer_attr=None):
    So groups should be larger than 1, and the number of channels should be
    divisible by groups.
    .. math::
       y_{si+j} = \max_k x_{gsi + sk + j}
       g = groups
       s = input.size / num_channels
       0 \le i < num_channels / groups
       0 \le j < s
       0 \le k < groups
    Please refer to the papers:
    - Maxout Networks: http://www.jmlr.org/proceedings/papers/v28/goodfellow13.pdf
    - Multi-digit Number Recognition from Street View \
...
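As a sanity check on the maxout formula above, here is a small numpy sketch of the computation for one flat, channel-major feature vector; it is illustrative only, not the layer's implementation:

    import numpy as np

    def maxout(x, num_channels, groups):
        # s is the spatial size of one channel; channels are laid out contiguously.
        s = x.size // num_channels
        # Group every `groups` consecutive channels and take the elementwise max.
        return x.reshape(num_channels // groups, groups, s).max(axis=1).ravel()

    x = np.arange(12.0)                         # num_channels=4 -> s=3
    print(maxout(x, num_channels=4, groups=2))  # [ 3.  4.  5.  9. 10. 11.]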
python/paddle/trainer_config_helpers/tests/configs/protostr/test_sequence_pooling.protostr
View file @ 3aa67981
...
@@ -14,6 +14,7 @@ layers {
    input_layer_name: "dat_in"
  }
  trans_type: "seq"
  seq_pool_stride: -1
}
layers {
  name: "__seq_pooling_1__"
...
@@ -24,6 +25,7 @@ layers {
    input_layer_name: "dat_in"
  }
  trans_type: "non-seq"
  seq_pool_stride: -1
}
layers {
  name: "__seq_pooling_2__"
...
@@ -35,6 +37,7 @@ layers {
  }
  average_strategy: "average"
  trans_type: "seq"
  seq_pool_stride: -1
}
layers {
  name: "__seq_pooling_3__"
...
@@ -46,6 +49,7 @@ layers {
  }
  average_strategy: "average"
  trans_type: "non-seq"
  seq_pool_stride: -1
}
layers {
  name: "__seq_pooling_4__"
...
@@ -57,6 +61,7 @@ layers {
  }
  average_strategy: "sum"
  trans_type: "seq"
  seq_pool_stride: -1
}
layers {
  name: "__seq_pooling_5__"
...
@@ -68,6 +73,7 @@ layers {
  }
  average_strategy: "sum"
  trans_type: "non-seq"
  seq_pool_stride: -1
}
layers {
  name: "__seq_pooling_6__"
...
@@ -77,8 +83,44 @@ layers {
  inputs {
    input_layer_name: "dat_in"
  }
  trans_type: "non-seq"
  seq_pool_stride: 5
}
layers {
  name: "__seq_pooling_7__"
  type: "average"
  size: 100
  active_type: ""
  inputs {
    input_layer_name: "dat_in"
  }
  average_strategy: "average"
  trans_type: "non-seq"
  seq_pool_stride: 5
}
layers {
  name: "__seq_pooling_8__"
  type: "average"
  size: 100
  active_type: ""
  inputs {
    input_layer_name: "dat_in"
  }
  average_strategy: "sum"
  trans_type: "non-seq"
  seq_pool_stride: 5
}
layers {
  name: "__seq_pooling_9__"
  type: "max"
  size: 100
  active_type: ""
  inputs {
    input_layer_name: "dat_in"
  }
  output_max_index: true
  trans_type: "non-seq"
  seq_pool_stride: -1
}
input_layer_names: "dat_in"
output_layer_names: "__seq_pooling_0__"
...
@@ -88,6 +130,9 @@ output_layer_names: "__seq_pooling_3__"
output_layer_names: "__seq_pooling_4__"
output_layer_names: "__seq_pooling_5__"
output_layer_names: "__seq_pooling_6__"
output_layer_names: "__seq_pooling_7__"
output_layer_names: "__seq_pooling_8__"
output_layer_names: "__seq_pooling_9__"
sub_models {
  name: "root"
  layer_names: "dat_in"
...
@@ -98,6 +143,9 @@ sub_models {
  layer_names: "__seq_pooling_4__"
  layer_names: "__seq_pooling_5__"
  layer_names: "__seq_pooling_6__"
  layer_names: "__seq_pooling_7__"
  layer_names: "__seq_pooling_8__"
  layer_names: "__seq_pooling_9__"
  input_layer_names: "dat_in"
  output_layer_names: "__seq_pooling_0__"
  output_layer_names: "__seq_pooling_1__"
...
@@ -106,6 +154,9 @@ sub_models {
  output_layer_names: "__seq_pooling_4__"
  output_layer_names: "__seq_pooling_5__"
  output_layer_names: "__seq_pooling_6__"
  output_layer_names: "__seq_pooling_7__"
  output_layer_names: "__seq_pooling_8__"
  output_layer_names: "__seq_pooling_9__"
  is_recurrent_layer_group: false
}
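To make the `seq_pool_stride: 5` entries above concrete, the following numpy sketch (illustrative only, not PaddlePaddle code) shows how stride-5 pooling shortens a sequence:

    import numpy as np

    def strided_seq_pool(seq, stride, op=np.mean):
        # Pool each non-overlapping window of `stride` time steps; the output
        # sequence has ceil(len(seq) / stride) steps.
        return np.array([op(seq[i:i + stride], axis=0)
                         for i in range(0, len(seq), stride)])

    seq = np.arange(12.0).reshape(12, 1)   # a 12-step sequence of dim 1
    out = strided_seq_pool(seq, stride=5)  # 3 steps: windows of 5, 5, and 2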
python/paddle/trainer_config_helpers/tests/configs/test_sequence_pooling.py
View file @ 3aa67981
...
@@ -14,6 +14,14 @@ for pt in POOL_TYPE:
    for al in AGG_LEVEL:
        opts.append(pooling_layer(input=din, agg_level=al, pooling_type=pt()))

for pt in POOL_TYPE:
    opts.append(
        pooling_layer(
            input=din,
            agg_level=AggregateLevel.TO_NO_SEQUENCE,
            pooling_type=pt(),
            stride=5))

opts.append(
    pooling_layer(input=din, pooling_type=MaxPooling(output_max_index=True)))
...
python/paddle/v2/framework/__init__.py
0 → 100644
View file @ 3aa67981
__all__ = ['proto']
python/paddle/v2/framework/tests/CMakeLists.txt
0 → 100644
View file @ 3aa67981
add_python_test(test_framework test_protobuf.py)
python/paddle/v2/framework/tests/test_protobuf.py
0 → 100644
View file @ 3aa67981
import paddle.v2.framework.proto.op_proto_pb2
import paddle.v2.framework.proto.attr_type_pb2
import unittest


class TestFrameworkProto(unittest.TestCase):
    def test_all(self):
        op_proto_lib = paddle.v2.framework.proto.op_proto_pb2
        attr_type_lib = paddle.v2.framework.proto.attr_type_pb2
        op_proto = op_proto_lib.OpProto()
        ipt0 = op_proto.inputs.add()
        ipt0.name = "a"
        ipt0.comment = "the input of cosine op"
        ipt1 = op_proto.inputs.add()
        ipt1.name = "b"
        ipt1.comment = "the other input of cosine op"
        opt = op_proto.outputs.add()
        opt.name = "output"
        opt.comment = "the output of cosine op"
        op_proto.comment = "cosine op, output = scale*cos(a, b)"
        attr = op_proto.attrs.add()
        attr.name = "scale"
        attr.comment = "scale of cosine op"
        attr.type = attr_type_lib.FLOAT
        op_proto.type = "cos"
        self.assertTrue(op_proto.IsInitialized())
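Since these are ordinary generated protobuf classes, a message built this way can also be round-tripped through the wire format. A minimal sketch using the standard protobuf API, assuming (as the test above suggests) that the fields set here satisfy the message's required fields:

    import paddle.v2.framework.proto.op_proto_pb2 as op_proto_pb2
    import paddle.v2.framework.proto.attr_type_pb2 as attr_type_pb2

    proto = op_proto_pb2.OpProto()
    proto.type = "cos"
    proto.comment = "cosine op"
    attr = proto.attrs.add()
    attr.name = "scale"
    attr.comment = "scale of cosine op"
    attr.type = attr_type_pb2.FLOAT

    data = proto.SerializeToString()   # protobuf wire format (bytes)
    parsed = op_proto_pb2.OpProto()
    parsed.ParseFromString(data)
    assert parsed.type == "cos" and parsed.attrs[0].name == "scale"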
python/paddle/v2/master/client.py
View file @ 3aa67981
...
@@ -26,14 +26,22 @@ class client(object):
        holder[idx] = c_ptr
        lib.paddle_set_dataset(self.c, holder, len(paths))

    # return format: (record, errno)
    # errno = 0: ok
    #       < 0: error
    def next_record(self):
        p = ctypes.c_char_p()
        ret = ctypes.pointer(p)
        size = lib.paddle_next_record(self.c, ret)
        if size < 0:
            # Error
            return None, size
        if size == 0:
            # Empty record
-           return ""
+           return "", 0
        record = ret.contents.value[:size]
        # Memory created from C should be freed.
        lib.mem_free(ret.contents)
-       return record
+       return record, 0
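A minimal consumption loop for this (record, errno) protocol might look like the sketch below; the constructor arguments for the master client are assumptions, since they are not shown in this diff. It mirrors the loop the new reader/creator.py uses further down:

    import paddle.v2.master as master

    # Assumed constructor arguments: master address and buffer size.
    c = master.client("localhost:8080", 100)
    c.set_dataset(["data/part-*.recordio"])
    while True:
        record, err = c.next_record()
        if err < 0:        # negative errno signals an error; stop consuming
            break
        if record == "":   # empty record with errno 0; skip it
            continue
        # process(record) ...
    c.close()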
python/paddle/v2/reader/creator.py
View file @ 3aa67981
...
@@ -57,17 +57,20 @@ def text_file(path):
    return reader
-def recordio(path):
+def recordio_local(paths, buf_size=100):
    """
-   Creates a data reader that outputs record one one by one from given recordio file
-   :path: path of recordio file
-   :returns: data reader of recordio file
+   Creates a data reader from given RecordIO file paths separated by ",",
+   glob pattern is supported.
+   :path: path of recordio files.
+   :returns: data reader of recordio files.
    """
    import recordio as rec
    import paddle.v2.reader.decorator as dec

    def reader():
-       f = rec.reader(path)
+       a = ','.join(paths)
+       f = rec.reader(a)
        while True:
            r = f.read()
            if r is None:
...
@@ -75,4 +78,38 @@ def recordio(path):
                yield r
        f.close()

    return dec.buffered(reader, buf_size)


def recordio(paths, buf_size=100):
    """
    Creates a data reader that outputs records one by one
    from given local or cloud recordio path.
    :path: path of recordio files.
    :returns: data reader of recordio files.
    """
    import os
    import paddle.v2.master.client as cloud

    if "KUBERNETES_SERVICE_HOST" not in os.environ.keys():
        return recordio_local(paths)

    host_name = "MASTER_SERVICE_HOST"
    if host_name not in os.environ.keys():
        raise Exception('cannot find ' + host_name + ' in environ.')
    addr = os.environ[host_name]

    def reader():
        c = cloud.client(addr, buf_size)
        c.set_dataset(paths)
        while True:
            r, err = c.next_record()
            if err < 0:
                break
            yield r
        c.close()

    return reader
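For illustration, reading records through the new creator might look like this; the glob path is a made-up example, and outside Kubernetes the call falls back to recordio_local():

    import paddle.v2.reader.creator as creator

    reader = creator.recordio(["data/part-*.recordio"], buf_size=100)
    for record in reader():
        print(record)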
python/paddle/v2/reader/tests/creator_test.py
View file @ 3aa67981
...
@@ -38,7 +38,7 @@ class TestRecordIO(unittest.TestCase):
    def test_recordio(self):
        path = os.path.join(
            os.path.dirname(__file__), "test_recordio_creator.dat")
-       reader = paddle.v2.reader.creator.recordio(path)
+       reader = paddle.v2.reader.creator.recordio([path])
        for idx, r in enumerate(reader()):
            self.assertSequenceEqual(r, str(idx))
...
python/setup.py.in
View file @ 3aa67981
...
@@ -9,7 +9,9 @@ packages=['paddle',
          'paddle.v2.dataset',
          'paddle.v2.reader',
          'paddle.v2.master',
-         'paddle.v2.plot']
+         'paddle.v2.plot',
+         'paddle.v2.framework',
+         'paddle.v2.framework.proto']
setup_requires=["requests",
                "numpy",
...
@@ -27,8 +29,11 @@ setup(name='paddle',
      description='Parallel Distributed Deep Learning',
      install_requires=setup_requires,
      packages=packages,
-     package_data={'paddle.v2.master': ['${paddle_master_LIB_NAME}'], },
+     package_data={'paddle.v2.master': ['libpaddle_master.so'], },
      package_dir={
-         '': '${CMAKE_CURRENT_SOURCE_DIR}'
+         '': '${CMAKE_CURRENT_SOURCE_DIR}',
+         # paddle.v2.framework.proto is generated while compiling,
+         # so this package points to another directory.
+         'paddle.v2.framework.proto': '${PROJ_BINARY_ROOT}/paddle/framework'
      },
)