BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Commit 87f9191e
Authored Apr 24, 2018 by fengjiayi

    fix Clang compile errors

Parent: d67b9ced

Showing 6 changed files with 19 additions and 21 deletions (+19 -21)
Changed files:

  paddle/gserver/dataproviders/PyDataProvider2.cpp          +1  -3
  paddle/gserver/gradientmachines/MultiGradientMachine.cpp  +1  -1
  paddle/gserver/layers/RecurrentLayerGroup.cpp             +1  -1
  paddle/parameter/Argument.cpp                             +7  -7
  paddle/parameter/AverageOptimizer.cpp                     +6  -6
  paddle/parameter/FirstOrderOptimizer.cpp                  +3  -3
paddle/gserver/dataproviders/PyDataProvider2.cpp
@@ -390,9 +390,7 @@ private:
     if (this->loadThread_) {
       // wait poolActualSize < poolSize;
       std::unique_lock<std::mutex> l(mtx_);
-      pushCV_.wait(l, [this, additionalBatchSize] {
-        return this->poolActualSize_ < poolSize_;
-      });
+      pushCV_.wait(l, [this] { return this->poolActualSize_ < poolSize_; });
     }
     {
paddle/gserver/gradientmachines/MultiGradientMachine.cpp
@@ -52,7 +52,7 @@ MultiGradientMachine::MultiGradientMachine(const ModelConfig& config,
   } else {
     numDevices_ = 0;
   }
-  ParamInitCallback mainParamInitCb = [this](int paramId, Parameter* para) {
+  ParamInitCallback mainParamInitCb = [](int paramId, Parameter* para) {
     // only create buf for CPU parameters
     // GPU parameters will be created in each thread
     if (para->useGpu()) return;
paddle/gserver/layers/RecurrentLayerGroup.cpp
@@ -72,7 +72,7 @@ void RecurrentLayerGroup::initSubNetwork(
   setNeedGradient(true);

   network_.reset(new RecurrentGradientMachine(config_.name(), rootNetwork));
-  ParamInitCallback cb = [this, rootNetwork](int paramId, Parameter* para) {
+  ParamInitCallback cb = [rootNetwork](int paramId, Parameter* para) {
     para->enableSharedType(
         PARAMETER_VALUE,
         rootNetwork->getParameters()[paramId]->getBuf(PARAMETER_VALUE),
paddle/parameter/Argument.cpp
@@ -325,12 +325,12 @@ void Argument::concat(const std::vector<Argument>& args,
         ->copyFrom(*src->subVec(srcStartRow, size), stream);
   };

-  auto copyStrs = [batchSize, stream](SVectorPtr& dst,
-                                      const SVectorPtr& src,
-                                      int desStartRow,
-                                      int srcStartRow,
-                                      int size,
-                                      bool useGpu) {
+  auto copyStrs = [batchSize](SVectorPtr& dst,
+                              const SVectorPtr& src,
+                              int desStartRow,
+                              int srcStartRow,
+                              int size,
+                              bool useGpu) {
     if (!src) {
       dst.reset();
       return;
@@ -413,7 +413,7 @@ void Argument::concat(const std::vector<Argument>& args,
     dst->subVec(startRow, src->getSize())->copyFrom(*src, stream);
   };

-  auto copyStrs = [batchSize, stream](
+  auto copyStrs = [batchSize](
       SVectorPtr& dst, const SVectorPtr& src, int startRow, bool useGpu) {
     if (!src) {
       dst.reset();
paddle/parameter/AverageOptimizer.cpp
@@ -81,9 +81,9 @@ ParameterOptimizer::TraverseCallback AverageOptimizer::needSpecialTraversal(
   if (numUpdates_ % kMaxNumAccumulates == 0) {
     // Move the sum to a different buffer to avoid loss of precision
     // due to too many sums.
-    callbacks.emplace_back([this](const VectorPtr vecs[],
-                                  const ParameterConfig& config,
-                                  size_t sparseId) {
+    callbacks.emplace_back([](const VectorPtr vecs[],
+                              const ParameterConfig& config,
+                              size_t sparseId) {
       vecs[PARAMETER_SUM2]->add(*vecs[PARAMETER_SUM1]);
       vecs[PARAMETER_SUM1]->zeroMem();
     });
@@ -94,9 +94,9 @@ ParameterOptimizer::TraverseCallback AverageOptimizer::needSpecialTraversal(
   if (auto callback = this->startCatchUpWith()) {
     callbacks.emplace_back(callback);
   }
-  callbacks.emplace_back([this](const VectorPtr vecs[],
-                                const ParameterConfig& config,
-                                size_t sparseId) {
+  callbacks.emplace_back([](const VectorPtr vecs[],
+                            const ParameterConfig& config,
+                            size_t sparseId) {
     vecs[PARAMETER_SUM3]->add(*vecs[PARAMETER_SUM1], *vecs[PARAMETER_SUM2]);
     vecs[PARAMETER_SUM1]->zeroMem();
     vecs[PARAMETER_SUM2]->zeroMem();
paddle/parameter/FirstOrderOptimizer.cpp
@@ -145,9 +145,9 @@ AdagradParameterOptimizer::needSpecialTraversal(
   if (numUpdates_ % kMaxNumAccumulates == 0) {
     // Move the sum to a different buffer to avoid loss of precision
     // due to too many sums.
-    return [this](const VectorPtr vecs[],
-                  const ParameterConfig& config,
-                  size_t sparseId) {
+    return [](const VectorPtr vecs[],
+              const ParameterConfig& config,
+              size_t sparseId) {
       vecs[PARAMETER_GRADIENT_SQURESUM]->add(*vecs[PARAMETER_GRADIENT_SQURESUM1]);
       vecs[PARAMETER_GRADIENT_SQURESUM1]->zeroMem();