Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle) · Star 1 · Fork 0
Commit eefe5a7c
Authored on Dec 27, 2016 by Yu Yang

Merge branch 'develop' of github.com:baidu/Paddle into feature/mnist_train_api

Parents: f8e4b0b2, 4ccd5ea1

Showing 32 changed files with 205 additions and 135 deletions (+205 −135)
demo/quick_start/api_predict.sh (+1 −1)
demo/quick_start/cluster/cluster_train.sh (+44 −0)
demo/quick_start/cluster/env.sh (+28 −0)
demo/quick_start/cluster/pserver.sh (+26 −0)
paddle/api/PaddleAPI.h (+15 −19)
paddle/api/paddle_ld_flags.py (+5 −2)
paddle/cuda/include/hl_base.h (+25 −41)
paddle/gserver/dataproviders/DataProvider.h (+1 −1)
paddle/gserver/layers/GruCompute.h (+1 −1)
paddle/gserver/layers/LstmCompute.h (+1 −1)
paddle/gserver/layers/MultinomialSampler.h (+1 −1)
paddle/math/BaseMatrix.h (+1 −1)
paddle/math/Matrix.h (+1 −1)
paddle/math/TensorExpression.h (+1 −1)
paddle/math/Vector.h (+1 −1)
paddle/parameter/ParallelParameter.h (+1 −1)
paddle/parameter/Parameter.h (+1 −1)
paddle/parameter/ParameterUpdateFunctions.h (+1 −1)
paddle/pserver/BaseClient.h (+1 −1)
paddle/pserver/ParameterClient2.h (+1 −1)
paddle/pserver/ParameterServer2.h (+1 −1)
paddle/setup.py.in (+4 −7)
paddle/trainer/ThreadParameterUpdater.h (+2 −2)
paddle/utils/CpuId.h (+1 −1)
paddle/utils/DisableCopy.h (+0 −23)
paddle/utils/Locks.h (+1 −1)
paddle/utils/Util.h (+1 −2)
paddle/utils/Version.h (+1 −1)
paddle/utils/common.h (+11 −4)
python/paddle/trainer_config_helpers/__init__.py (+1 −2)
python/paddle/trainer_config_helpers/attrs.py (+25 −15)
python/paddle/trainer_config_helpers/layer_math.py (+0 −0)
demo/quick_start/api_predict.sh (+1 −1)

@@ -17,7 +17,7 @@ set -e
 #Note the default model is pass-00002, you shold make sure the model path
 #exists or change the mode path.
 #only test on trainer_config.lr.py
-model=output/pass-00001/
+model=output/model/pass-00001/
 config=trainer_config.lr.py
 label=data/labels.list
 dict=data/dict.txt
demo/quick_start/cluster/cluster_train.sh (new file, mode 100755, +44 −0)

#!/bin/bash
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e

# Should run pserver.sh before run this script.
bin_dir=$(cd `dirname $0`; pwd)
home_dir=$(cd "${bin_dir}/.."; pwd)
source "$bin_dir/env.sh"

model_dir="$bin_dir/output"
log_file="$bin_dir/train.log"

pushd "$home_dir"

cfg=trainer_config.lr.py
paddle train \
  --config=$cfg \
  --save_dir=${model_dir} \
  --trainer_count=4 \
  --local=0 \
  --log_period=100 \
  --num_passes=15 \
  --use_gpu=false \
  --show_parameter_stats_period=100 \
  --test_all_data_in_one_period=1 \
  --num_gradient_servers=1 \
  --nics=`get_nics` \
  --port=7164 \
  --ports_num=1 \
  --pservers="127.0.0.1" \
  --comment="paddle_trainer" \
  2>&1 | tee "$log_file"

popd
demo/quick_start/cluster/env.sh (new file, mode 100644, +28 −0)

#!/bin/bash
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e

function get_nics() {
  machine=`uname -s`
  local nics=""
  if [ "$machine" == "Linux" ]; then
    nics="lo"
  elif [ "$machine" == "Darwin" ]; then
    nics="lo0"
  else
    nics="unsupport"
  fi
  echo $nics
}
demo/quick_start/cluster/pserver.sh (new file, mode 100755, +26 −0)

#!/bin/bash
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e

bin_dir=$(cd `dirname $0`; pwd)
source "$bin_dir/env.sh"

paddle pserver \
  --nics=`get_nics` \
  --port=7164 \
  --ports_num=1 \
  --ports_num_for_sparse=1 \
  --num_gradient_servers=1 \
  --comment="paddle_pserver" \
  2>&1 | tee 'pserver.log'
paddle/api/PaddleAPI.h (+15 −19)

@@ -20,15 +20,11 @@
 #include <string>
 #include <vector>
 #include "paddle/utils/GlobalConstants.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 /// Import PaddlePaddle's enumeration into global namespace.
 using namespace paddle::enumeration_wrapper;  // NOLINT

-#define DISABLE_COPY_AND_ASSIGN(classname) \
-  classname(const classname& other);       \
-  classname& operator=(const classname& other)
-
 /**
  * @brief Initialize paddle.
  *
@@ -102,7 +98,7 @@
 struct MatrixPrivate;
 class Matrix {
   Matrix();  // User Cannot Create Matrix.
-  DISABLE_COPY_AND_ASSIGN(Matrix);
+  DISABLE_COPY(Matrix);
   static Matrix* createByPaddleMatrixPtr(void* sharedPtr);

 public:
@@ -242,7 +238,7 @@
 struct VectorPrivate;
 class Vector {
-  DISABLE_COPY_AND_ASSIGN(Vector);
+  DISABLE_COPY(Vector);
   Vector();
   static Vector* createByPaddleVectorPtr(void* ptr);
@@ -322,7 +318,7 @@
 struct IVectorPrivate;
 class IVector {
   IVector();
-  DISABLE_COPY_AND_ASSIGN(IVector);
+  DISABLE_COPY(IVector);
   static IVector* createByPaddleVectorPtr(void* ptr);

 public:
@@ -402,7 +398,7 @@
 class Arguments {
 private:
   Arguments();  // Internal Create.
-  DISABLE_COPY_AND_ASSIGN(Arguments);
+  DISABLE_COPY(Arguments);

 public:
@@ -472,7 +468,7 @@
 struct ParameterConfigPrivate;
 class ParameterConfig {
-  DISABLE_COPY_AND_ASSIGN(ParameterConfig);
+  DISABLE_COPY(ParameterConfig);
   ParameterConfig();
@@ -502,7 +498,7 @@
 struct OptimizationConfigPrivate;
 class OptimizationConfig {
-  DISABLE_COPY_AND_ASSIGN(OptimizationConfig);
+  DISABLE_COPY(OptimizationConfig);
   OptimizationConfig();

 public:
@@ -527,7 +523,7 @@
 class Parameter {
 private:
   Parameter();
-  DISABLE_COPY_AND_ASSIGN(Parameter);
+  DISABLE_COPY(Parameter);

 public:
   virtual ~Parameter();
@@ -572,7 +568,7 @@
 class ModelConfig {
 private:
   ModelConfig();
-  DISABLE_COPY_AND_ASSIGN(ModelConfig);
+  DISABLE_COPY(ModelConfig);

 public:
   virtual ~ModelConfig();
@@ -593,7 +589,7 @@
 class TrainerConfig {
 private:
   TrainerConfig();
-  DISABLE_COPY_AND_ASSIGN(TrainerConfig);
+  DISABLE_COPY(TrainerConfig);

 public:
   virtual ~TrainerConfig();
@@ -633,7 +629,7 @@
 struct ParameterTraverseCallbackPrivate;
 class ParameterTraverseCallback {
-  DISABLE_COPY_AND_ASSIGN(ParameterTraverseCallback);
+  DISABLE_COPY(ParameterTraverseCallback);
   ParameterTraverseCallback();

 public:
@@ -655,7 +651,7 @@
 struct ParameterOptimizerPrivate;
 class ParameterOptimizer {
-  DISABLE_COPY_AND_ASSIGN(ParameterOptimizer);
+  DISABLE_COPY(ParameterOptimizer);
   ParameterOptimizer();

 public:
@@ -692,7 +688,7 @@
 class GradientMachine {
 private:
   GradientMachine();
-  DISABLE_COPY_AND_ASSIGN(GradientMachine);
+  DISABLE_COPY(GradientMachine);

 public:
   virtual ~GradientMachine();
@@ -908,7 +904,7 @@
   TrainerPrivate* m;
   Trainer();
   Trainer(TrainerConfig* optConfig, GradientMachine* gm);
-  DISABLE_COPY_AND_ASSIGN(Trainer);
+  DISABLE_COPY(Trainer);

 public:
   virtual ~Trainer();
@@ -974,7 +970,7 @@
 struct SequenceGeneratorPrivate;
 class SequenceGenerator {
-  DISABLE_COPY_AND_ASSIGN(SequenceGenerator);
+  DISABLE_COPY(SequenceGenerator);
   SequenceGenerator();

 public:
paddle/api/paddle_ld_flags.py (+5 −2)

@@ -141,9 +141,12 @@
         def c_flag(self):
             if self.with_coverage:
-                return ["-fprofile-arcs", "-ftest-coverage", "-O0", "-g"]
+                return [
+                    "-fprofile-arcs", "-ftest-coverage", "-O0", "-g",
+                    "-std=c++11"
+                ]
             else:
-                return None
+                return ["-std=c++11"]

 except ImportError:

     class PaddleLDFlag(object):
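The behavioral change in this hunk is that c_flag() now pins the C++ standard in every branch: the non-coverage path used to return None and now returns ["-std=c++11"]. A minimal sketch of the new logic, with a standalone function standing in for the real PaddleLDFlag method:

```python
def c_flag(with_coverage):
    """Sketch of PaddleLDFlag.c_flag after this commit."""
    if with_coverage:
        # Coverage builds keep the instrumentation flags and add the C++11 pin.
        return ["-fprofile-arcs", "-ftest-coverage", "-O0", "-g", "-std=c++11"]
    # Before this commit, this branch returned None.
    return ["-std=c++11"]
```

Always returning a list (never None) is what lets the setup.py.in change below this pass the result straight to extra_compile_args.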
paddle/cuda/include/hl_base.h (+25 −41)

@@ -16,7 +16,31 @@
 #define HL_BASE_H_

 #include <cstddef>
-#include "paddle/utils/TypeDefs.h"
+
+#ifdef PADDLE_TYPE_DOUBLE
+#define HL_FLOAT_MAX 1.7976931348623157e+308
+#define HL_FLOAT_MIN 2.2250738585072014e-308
+using real = double;
+#else
+#define HL_FLOAT_MAX 3.40282347e+38F
+#define HL_FLOAT_MIN 1.17549435e-38F
+using real = float;
+#endif
+
+/**
+ * The maximum input value for exp, used to avoid overflow problem.
+ * currently only used for tanh function.
+ */
+#define EXP_MAX_INPUT 40.0
+
+/**
+ * @brief DIVUP(x, y) is similar to ceil(x / y).
+ * @note For CUDA, DIVUP will be used to specify
+ *       the size of blockDim.
+ */
+#ifndef DIVUP
+#define DIVUP(x, y) (((x) + (y)-1) / (y))
+#endif

 /**
  * HPPL is an internal high performance parallel computing library
@@ -181,46 +205,6 @@
   size_t nnz;
 } _hl_sparse_matrix_s, *hl_sparse_matrix_s;

-#ifndef PADDLE_TYPE_DOUBLE
-/**
- * HPPL data type: real (float or double)
- *
- * if real == float
- *
- *    HL_FLOAT_MAX: 3.40282347e+38F
- *
- *    HL_FLOAT_MIN: 1.17549435e-38F
- */
-#define HL_FLOAT_MAX 3.40282347e+38F
-/**
- * if real == double
- *
- *    HL_FLOAT_MAX: 1.7976931348623157e+308
- *
- *    HL_FLOAT_MIN: 2.2250738585072014e-308
- */
-#define HL_FLOAT_MIN 1.17549435e-38F
-#else
-#define HL_FLOAT_MAX 1.7976931348623157e+308
-#define HL_FLOAT_MIN 2.2250738585072014e-308
-#endif
-
-/**
- * The maximum input value for exp, used to avoid overflow problem.
- *
- * Currently only used for tanh function.
- */
-#define EXP_MAX_INPUT 40.0
-
-/**
- * @brief DIVUP(x, y) is similar to ceil(x / y).
- * @note For CUDA, DIVUP will be used to specify
- *       the size of blockDim.
- */
-#ifndef DIVUP
-#define DIVUP(x, y) (((x) + (y)-1) / (y))
-#endif
-
 #ifdef __NVCC__

 #include "cuda_runtime.h"
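DIVUP is plain integer ceiling division, typically used to compute how many thread blocks are needed to cover N elements. A quick sketch of the arithmetic in Python:

```python
def divup(x, y):
    # Mirrors the DIVUP(x, y) macro: (((x) + (y) - 1) / (y)) with integer division.
    return (x + y - 1) // y

# e.g. covering 1000 elements with 256-thread blocks needs divup(1000, 256) blocks
```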
paddle/gserver/dataproviders/DataProvider.h (+1 −1)

@@ -34,8 +34,8 @@
 #include "paddle/utils/Logging.h"
 #include "paddle/utils/Queue.h"
 #include "paddle/utils/ThreadLocal.h"
-#include "paddle/utils/TypeDefs.h"
 #include "paddle/utils/Util.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/gserver/layers/GruCompute.h (+1 −1)

@@ -16,7 +16,7 @@
 #include "ModelConfig.pb.h"
 #include "hl_gpu.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/gserver/layers/LstmCompute.h (+1 −1)

@@ -16,7 +16,7 @@
 #include "ModelConfig.pb.h"
 #include "hl_gpu.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/gserver/layers/MultinomialSampler.h (+1 −1)

@@ -16,7 +16,7 @@
 #include <memory>
 #include <random>
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/math/BaseMatrix.h (+1 −1)

@@ -16,7 +16,7 @@
 #include <stdint.h>
 #include <cstddef>
 #include "TensorExpression.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/math/Matrix.h (+1 −1)

@@ -27,7 +27,7 @@
 #include "MemoryHandle.h"
 #include "Vector.h"
 #include "paddle/utils/ThreadLocal.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/math/TensorExpression.h (+1 −1)

@@ -17,7 +17,7 @@
 #include <cstddef>
 #include "hl_tensor_ops.h"
 #include "paddle/utils/Logging.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/math/Vector.h (+1 −1)

@@ -22,7 +22,7 @@
 #include "BaseMatrix.h"
 #include "MemoryHandle.h"
 #include "paddle/utils/Thread.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/parameter/ParallelParameter.h (+1 −1)

@@ -28,7 +28,7 @@
 #include "paddle/parameter/ParameterUpdateFunctions.h"
 #include "paddle/utils/Flags.h"
 #include "paddle/utils/Locks.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 #include "ParameterConfig.pb.h"

paddle/parameter/Parameter.h (+1 −1)

@@ -29,8 +29,8 @@
 #include "paddle/utils/GlobalConstants.h"
 #include "paddle/utils/Locks.h"
 #include "paddle/utils/ThreadLocal.h"
-#include "paddle/utils/TypeDefs.h"
 #include "paddle/utils/Util.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/parameter/ParameterUpdateFunctions.h (+1 −1)

@@ -15,7 +15,7 @@
 #pragma once

 #include "paddle/math/Vector.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/pserver/BaseClient.h (+1 −1)

@@ -18,7 +18,7 @@
 #include "paddle/math/Matrix.h"
 #include "paddle/pserver/ProtoServer.h"
 #include "paddle/utils/Queue.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 namespace paddle {

paddle/pserver/ParameterClient2.h (+1 −1)

@@ -26,8 +26,8 @@
 #include "paddle/utils/Flags.h"
 #include "paddle/utils/Locks.h"
 #include "paddle/utils/Queue.h"
-#include "paddle/utils/TypeDefs.h"
 #include "paddle/utils/Util.h"
+#include "paddle/utils/common.h"

 #include "ParameterService.pb.h"

paddle/pserver/ParameterServer2.h (+1 −1)

@@ -32,7 +32,7 @@
 #include "paddle/utils/Locks.h"
 #include "paddle/utils/Stat.h"
 #include "paddle/utils/ThreadLocal.h"
-#include "paddle/utils/TypeDefs.h"
+#include "paddle/utils/common.h"

 #include "ParameterService.pb.h"
paddle/setup.py.in (+4 −7)

@@ -30,8 +30,10 @@
 # The extra links will passed from COMAKE
 # because generate paddle LDFLAGS is too complicated to do in setup.py
 # it just read COMAKE generated LDFLAGS.
 extra_comps = []
 extra_links = []
 obj = api.paddle_ld_flags.PaddleLDFlag()
+extra_comps = obj.c_flag()
 ldflags = obj.ldflag_str()
 if ldflags is not None:
   extra_links.extend(ldflags.split(" "))

@@ -51,20 +53,15 @@
 include_dirs = [np.get_include(), "../"]    # include numpy and paddle.

-extra_c = obj.c_flag()
-
-attr=dict()
-if extra_c is not None:
-  attr["extra_compile_args"] = extra_c
-
 setup(name="py_paddle",
   version="@PADDLE_VERSION@",
   ext_modules=[
     Extension('py_paddle._swig_paddle',      # Build SWIG Extension.
       ['Paddle_wrap.cxx'],
       language = "c++",
       include_dirs = include_dirs,
       extra_link_args = extra_links,
-      **attr
+      extra_compile_args = extra_comps
     )
   ],
   packages=['py_paddle'],
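The deleted attr machinery was the usual conditional-keyword-argument pattern, needed only because c_flag() could return None; since c_flag() now always returns a list, extra_compile_args can be passed directly. A sketch of both styles (fake_extension is a hypothetical stand-in for distutils' Extension, used here only to show the keyword plumbing):

```python
def fake_extension(name, sources, **kwargs):
    # Hypothetical stand-in for distutils.extension.Extension; records its kwargs.
    return {"name": name, "sources": sources, **kwargs}

# Old style: build the kwargs dict only when there is something to pass.
extra_c = None  # c_flag() could return None before this commit
attr = dict()
if extra_c is not None:
    attr["extra_compile_args"] = extra_c
old = fake_extension("py_paddle._swig_paddle", ["Paddle_wrap.cxx"], **attr)

# New style: c_flag() always returns a list now, so the keyword is passed directly.
new = fake_extension("py_paddle._swig_paddle", ["Paddle_wrap.cxx"],
                     extra_compile_args=["-std=c++11"])
```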
paddle/trainer/ThreadParameterUpdater.h (+2 −2)

@@ -33,8 +33,8 @@
    because at the current moment, the merging on CPU is happening on the
    main thread, and the its parameter size can be much larger than the one GPU.
    Thus, for GPU, the parameter updates happens in updateImpl() function, which
-   is called by gradient machines as a callback function as a callback function
-   supplied to backward() and forwardBackward().
+   is called by gradient machines as a callback function supplied to backward()
+   and forwardBackward().
    For CPU, the parameter updates happens in separate threads maintained by this
    class.
 */
paddle/utils/CpuId.h (+1 −1)

@@ -11,7 +11,7 @@
 #pragma once

-#include "DisableCopy.h"
+#include "common.h"

 namespace paddle {
paddle/utils/DisableCopy.h (deleted, mode 100644 → 0, +0 −23)

/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once

/**
 * Disable copy macro.
 */
#define DISABLE_COPY(CLASS_NAME)                \
  CLASS_NAME(CLASS_NAME &&) = delete;           \
  CLASS_NAME(const CLASS_NAME &other) = delete; \
  CLASS_NAME &operator=(const CLASS_NAME &other) = delete
paddle/utils/Locks.h (+1 −1)

@@ -19,7 +19,7 @@
 #include <condition_variable>
 #include <mutex>

-#include "DisableCopy.h"
+#include "common.h"

 namespace paddle {
paddle/utils/Util.h (+1 −2)

@@ -26,12 +26,11 @@
 #include <unordered_map>
 #include <vector>

-#include "DisableCopy.h"
 #include "Logging.h"
 #include "TrainerConfig.pb.h"
+#include "common.h"
 #include "Flags.h"
-#include "TypeDefs.h"
 #include "hl_gpu.h"

 /**
paddle/utils/Version.h (+1 −1)

@@ -15,7 +15,7 @@
 #pragma once
 #include <stddef.h>
 #include <iostream>
-#include "TypeDefs.h"
+#include "common.h"

 namespace paddle {
paddle/utils/TypeDefs.h → paddle/utils/common.h (renamed, +11 −4)

@@ -15,12 +15,19 @@
 #pragma once

 namespace paddle {
+
+/**
+ * Disable copy macro.
+ */
+#define DISABLE_COPY(class_name)                \
+  class_name(class_name &&) = delete;           \
+  class_name(const class_name &other) = delete; \
+  class_name &operator=(const class_name &other) = delete
+
 #ifdef PADDLE_TYPE_DOUBLE
-typedef double real;
+using real = double;
 #else
-typedef float real;
+using real = float;
 #endif
 }  // namespace paddle

 using paddle::real;
python/paddle/trainer_config_helpers/__init__.py (+1 −2)

@@ -21,6 +21,5 @@
 from optimizers import *
 from attrs import *
 from config_parser_utils import *
-# This will enable operator overload for LayerOutput
-import math as layer_math
+import layer_math
python/paddle/trainer_config_helpers/attrs.py (+25 −15)

@@ -19,34 +19,34 @@
Whitespace-only re-indentation of convert_and_compare() and is_compatible_with(): the docstrings ("Convert x to be the same type as Type and then convert back to check whether there is a loss of information" and "Check if x has a type compatible with Type", each with ":param x: object to be checked" and ":param Type: target type to check x over") and the inline comments about not using str/bool values to initialize float/int/bool variables are re-indented; the logic (type(x)(Type(x)) == x round-trip, the type(x) == Type fast path, and the float/int/bool branches guarded by isinstance checks) is unchanged.

@@ -88,6 +88,10 @@
     :param momentum: The parameter momentum. None means use global value.
     :type momentum: float or None
+    :param gradient_clipping_threshold: gradient clipping threshold. If gradient
+                                        value larger than some value, will be
+                                        clipped.
+    :type gradient_clipping_threshold: float
     :param sparse_update: Enable sparse update for this parameter. It will
                           enable both local and remote sparse update.
     :type sparse_update: bool

@@ -104,6 +108,7 @@
                  l2_rate=None,
                  learning_rate=None,
                  momentum=None,
+                 gradient_clipping_threshold=None,
                  sparse_update=False):
         # initialize strategy.
         if is_static:

@@ -152,6 +157,11 @@
             self.attr['sparse_update'] = True
             self.attr['sparse_remote_update'] = True

+        if gradient_clipping_threshold is not None and \
+                is_compatible_with(gradient_clipping_threshold, float):
+            self.attr['gradient_clipping_threshold'] = \
+                gradient_clipping_threshold
+
     def set_default_parameter_name(self, name):
         """
         Set default parameter name. If parameter not set, then will use default
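The new gradient_clipping_threshold branch relies on is_compatible_with() accepting only float-compatible values. A self-contained sketch of the two helpers as they read after this commit (re-typed here for illustration, so treat the details as approximate):

```python
def convert_and_compare(x, Type):
    # Round-trip x through Type; equality means no information was lost.
    return type(x)(Type(x)) == x

def is_compatible_with(x, Type):
    # Check if x has a type compatible with Type.
    if type(x) == Type:
        return True
    try:
        if float == Type or int == Type:
            # str and bool values should not initialize float/int variables.
            if not isinstance(x, str) and not isinstance(x, bool):
                return convert_and_compare(x, Type)
        elif bool == Type:
            # A string should not initialize a bool variable.
            if not isinstance(x, str):
                return convert_and_compare(x, Type)
        return False
    except Exception:
        return False
```

So an int threshold like 5 passes the float check (int(float(5)) == 5), while 3.5 is rejected as an int and any string is rejected outright.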
python/paddle/trainer_config_helpers/math.py → python/paddle/trainer_config_helpers/layer_math.py (renamed, +0 −0)

File moved without changes.