s920243400 / PaddleDetection · forked from PaddlePaddle / PaddleDetection
Commit 310ef226
Authored on Sep 12, 2017 by zhangchao41

Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into my-paddle

Parents: 297b3d0f, 8be9930f
Showing 13 changed files with 551 additions and 274 deletions (+551 −274).
doc/howto/dev/write_docs_cn.rst                    +25  −39
paddle/framework/CMakeLists.txt                    +1   −0
paddle/framework/lod_tensor.h                      +4   −1
paddle/framework/lod_tensor_test.cu                +52  −0
paddle/gserver/layers/Layer.h                      +6   −5
paddle/gserver/layers/MKLDNNFcLayer.cpp            +81  −131
paddle/gserver/layers/MKLDNNFcLayer.h              +17  −24
paddle/gserver/layers/MKLDNNLayer.h                +169 −32
paddle/gserver/tests/MKLDNNTester.cpp              +45  −33
paddle/gserver/tests/MKLDNNTester.h                +7   −5
paddle/operators/math/im2col_test.cc               +1   −1
paddle/pybind/pybind.cc                            +62  −0
python/paddle/v2/framework/tests/test_tensor.py    +81  −3
doc/howto/dev/write_docs_cn.rst — view file @ 310ef226

@@ -5,15 +5,13 @@
 PaddlePaddle's documentation consists of an English part (``doc``) and a Chinese part (``doc_cn``). Both are built with `sphinx`_ driven by `cmake`_, and the generated pages are stored in the ``doc`` and ``doc_cn`` subdirectories of the build directory.

-How to build PaddlePaddle's documentation
-=========================================
+How to build the documentation
+==============================

-There are two ways to build the PaddlePaddle documentation, building directly and building with Docker, and we provide a build script build_docs.sh for both.
-Preparing the environment for the PaddlePaddle documentation is relatively involved, so we recommend building the PaddlePaddle documentation with Docker.
+There are two ways to build the PaddlePaddle documentation.

-Building the PaddlePaddle documentation with Docker
----------------------------------------------------
+Building with Docker
+--------------------

 To build the PaddlePaddle documentation with Docker, first install the Docker toolkit; see `Docker's website <https://docs.docker.com/>`_ for installation instructions. With Docker installed, the documentation can be built with the script in the source tree:

@@ -21,58 +19,46 @@
 .. code-block:: bash

    cd TO_YOUR_PADDLE_CLONE_PATH
    cd paddle/scripts/tools/build_docs
-   bash build_docs.sh with_docker
+   sh build_docs.sh

-After compilation, two subdirectories are generated in the current directory:
-
-* doc     the English documentation
-* doc_cn  the Chinese documentation
+After compilation, two subdirectories are generated in the current directory: doc (the English documentation) and doc_cn (the Chinese documentation).
 Open index.html in the corresponding directory in a browser to read the local documentation.

-Building the PaddlePaddle documentation directly
--------------------------------------------------
+Building directly
+-----------------

-Because generating the PaddlePaddle v2 API documentation depends on the py_paddle Python package, first confirm that py_paddle is installed:
-
-.. code-block:: bash
-
-   python -c "import py_paddle"
-
-If this reports an error, build and install PaddlePaddle locally first; see the `build-from-source guide <http://doc.paddlepaddle.org/develop/doc/getstarted/build_and_install/build_from_source_en.html>`_ .
-Note: keep the WITH_DOC option off when building and installing PaddlePaddle for the first time. After a successful build and install, confirm again that py_paddle is installed before continuing.
-
 If the check passes, the documentation can be generated with:

 .. code-block:: bash

    cd TO_YOUR_PADDLE_CLONE_PATH
-   cd paddle/scripts/tools/build_docs
-   bash build_docs.sh local
+   mkdir -p build
+   cd build
+   cmake .. -DCMAKE_BUILD_TYPE=Debug -DWITH_GPU=OFF -DWITH_MKLDNN=OFF -DWITH_MKLML=OFF -DWITH_DOC=ON
+   make gen_proto_py
+   make paddle_docs paddle_docs_cn

-After compilation, two subdirectories are generated in the current directory:
-
-* doc     the English documentation
-* doc_cn  the Chinese documentation
+After compilation, two subdirectories are generated in the current directory: doc (the English documentation) and doc_cn (the Chinese documentation).
 Open index.html in the corresponding directory in a browser to read the local documentation.

-How to write PaddlePaddle's documentation
-=========================================
+How to write documentation
+==========================

 The PaddlePaddle documentation is generated automatically with `sphinx`_; see the sphinx tutorial for how to write it.

-How to update the www.paddlepaddle.org documentation
-====================================================
+How to update the documentation theme
+=====================================

-Comments that developers add to the PaddlePaddle code are submitted to github as PRs; see the `contribution guide <http://doc.paddlepaddle.org/develop/doc_cn/howto/dev/contribute_to_paddle_cn.html>`_ .
+The PaddlePaddle documentation theme lives in the `TO_YOUR_PADDLE_CLONE_PATH/doc_theme` folder and contains all files related to the front-end page design.
+
+How to update doc.paddlepaddle.org
+==================================
+
+Updated documentation is submitted to github as a PR; see the `contribution guide <http://doc.paddlepaddle.org/develop/doc_cn/howto/dev/contribute_to_paddle_cn.html>`_ .
 The documentation for PaddlePaddle's develop branch is rebuilt automatically on update; the latest `Chinese documentation <http://doc.paddlepaddle.org/develop/doc_cn/>`_ and
 `English documentation <http://doc.paddlepaddle.org/develop/doc/>`_ can be viewed there.

 .. _cmake: https://cmake.org/
 .. _sphinx: http://www.sphinx-doc.org/en/1.4.8/
paddle/framework/CMakeLists.txt — view file @ 310ef226

@@ -9,6 +9,7 @@ cc_test(eigen_test SRCS eigen_test.cc DEPS tensor)

 cc_library(lod_tensor SRCS lod_tensor.cc DEPS ddim place tensor)
 cc_test(lod_tensor_test SRCS lod_tensor_test.cc DEPS lod_tensor)
+nv_test(lod_tensor_gpu_test SRCS lod_tensor_test.cu DEPS lod_tensor)

 cc_test(variable_test SRCS variable_test.cc)
paddle/framework/lod_tensor.h — view file @ 310ef226

@@ -18,8 +18,10 @@
 #ifndef PADDLE_ONLY_CPU
 #include <thrust/device_vector.h>
 #include <thrust/host_vector.h>
+#include <thrust/system/cuda/experimental/pinned_allocator.h>
 #endif

+#include <glog/logging.h>
 #include "paddle/framework/ddim.h"
 #include "paddle/framework/tensor.h"
 #include "paddle/platform/enforce.h"

@@ -32,7 +34,8 @@ template <typename T>
 using Vector = std::vector<T>;
 #else
 template <typename T>
-using Vector = thrust::host_vector<T>;
+using Vector = thrust::host_vector<
+    T, thrust::system::cuda::experimental::pinned_allocator<T>>;
 #endif

 using LoD = std::vector<Vector<size_t>>;
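The switch to a pinned allocator is what makes the new GPU test below possible: memory allocated page-locked through CUDA (as pinned_allocator does) is, on platforms with unified virtual addressing, directly addressable from device code, so lod[0].data() can be handed straight to a kernel. A minimal stand-alone sketch of the alias, under the assumption of a CUDA-era Thrust that still ships pinned_allocator (PinnedVector is a hypothetical name, not part of this commit):

#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/system/cuda/experimental/pinned_allocator.h>

// Hypothetical alias mirroring the Vector change in the hunk above.
template <typename T>
using PinnedVector = thrust::host_vector<
    T, thrust::system::cuda::experimental::pinned_allocator<T>>;

int main() {
  PinnedVector<size_t> level;  // elements live in page-locked host memory
  for (int v : {0, 2, 4, 6}) level.push_back(v);
  // Copies from pinned pages take the fast DMA path to the device:
  thrust::device_vector<size_t> on_gpu = level;
  return on_gpu.size() == level.size() ? 0 : 1;
}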
paddle/framework/lod_tensor_test.cu — new file (mode 100644), view file @ 310ef226

/*
   Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
*/

#include <cuda.h>
#include <cuda_runtime.h>
#include "paddle/framework/lod_tensor.h"
#include "paddle/platform/assert.h"

#include <gtest/gtest.h>

__global__ void test(size_t* a, int size) {
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < size;
       i += blockDim.x * gridDim.x) {
    a[i] *= 2;
  }
}

TEST(LoDTensor, LoDInGPU) {
  paddle::framework::Tensor tensor;
  paddle::framework::LoDTensor lod_tensor;
  paddle::platform::GPUPlace place(0);

  paddle::framework::LoD src_lod;
  src_lod.push_back(std::vector<size_t>{0, 2, 4, 6, 8, 10, 12, 14});

  tensor.Resize({14, 16});
  tensor.mutable_data<float>(place);

  lod_tensor.set_lod(src_lod);
  lod_tensor.set_tensor(&tensor);
  CHECK_EQ(lod_tensor.lod_element(0, 2), 4);
  CHECK_EQ(lod_tensor.lod_element(0, 4), 8);

  auto lod = lod_tensor.lod();

  test<<<1, 8>>>(lod[0].data(), lod[0].size());
  cudaDeviceSynchronize();

  for (size_t i = 0; i < src_lod[0].size(); ++i) {
    CHECK_EQ(lod[0].data()[i], src_lod[0].data()[i] * 2);
  }
}
paddle/gserver/layers/Layer.h — view file @ 310ef226

@@ -49,6 +49,12 @@ struct LayerState {
 };
 typedef std::shared_ptr<LayerState> LayerStatePtr;

+/// Paddle device ID, MKLDNN is -2, CPU is -1
+enum PADDLE_DEVICE_ID {
+  MKLDNN_DEVICE = -2,
+  CPU_DEVICE = -1,
+};
+
 /**
  * @brief Base class for layer.
  * Define necessary variables and functions for every layer.

@@ -59,11 +65,6 @@ protected:
   LayerConfig config_;
   /// whether to use GPU
   bool useGpu_;
-  /// Paddle device ID, MKLDNN is -2, CPU is -1
-  enum PADDLE_DEVICE_ID {
-    MKLDNN_DEVICE = -2,
-    CPU_DEVICE = -1,
-  };
   /// Device Id. MKLDNN is -2, CPU is -1, and GPU is 0, 1, 2 ...
   int deviceId_;
   /// Input layers
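The point of hoisting PADDLE_DEVICE_ID out of the class body is visibility: constants declared inside Layer are only reachable through Layer, while namespace-scope constants can be used by non-member code such as the MKLDNN test utilities that now pass CPU_DEVICE around. A toy sketch of the difference (names other than the enum are hypothetical):

// Toy sketch: an enum at namespace scope is reachable by non-members.
namespace sketch {

/// Paddle device ID, MKLDNN is -2, CPU is -1 (copied from the hunk above)
enum PADDLE_DEVICE_ID {
  MKLDNN_DEVICE = -2,
  CPU_DEVICE = -1,
};

class Layer {
public:
  int deviceId_ = CPU_DEVICE;  // members use the enum exactly as before
};

// Free functions and other classes (e.g. a tester) can now use it too:
bool isCpu(const Layer& l) { return l.deviceId_ == CPU_DEVICE; }

}  // namespace sketch

int main() { return sketch::isCpu(sketch::Layer{}) ? 0 : 1; }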
paddle/gserver/layers/MKLDNNFcLayer.cpp — view file @ 310ef226

@@ -14,7 +14,6 @@ limitations under the License. */

 #include "MKLDNNFcLayer.h"
 #include "paddle/utils/Logging.h"
-#include "paddle/utils/Stat.h"

 using namespace mkldnn;  // NOLINT
 typedef memory::format format;

@@ -40,6 +39,8 @@ bool MKLDNNFcLayer::init(const LayerMap& layerMap,
   oc_ = getSize();
   oh_ = 1;
   ow_ = 1;
+  ih_ = 1;
+  iw_ = 1;

   // input size can not change in FC
   iLayerSize_ = inputLayers_[0]->getSize();

@@ -77,67 +78,53 @@ void MKLDNNFcLayer::convertWeightsToPaddle() {
   wgtVal_->reorderDataTo(wgtVal_, dstFmt, targetDim);
 }

-void MKLDNNFcLayer::reshape() {
-  const Argument& input = getInput(0, getPrev(0)->getDeviceId());
-  int batchSize = input.getBatchSize();
-  if (bs_ == batchSize) {
-    return;
-  }
-  bs_ = batchSize;
-  ih_ = input.getFrameHeight();
-  iw_ = input.getFrameWidth();
-  if (ih_ == 0) {
-    ih_ = 1;
-  }
-  if (iw_ == 0) {
-    iw_ = 1;
-  }
+void MKLDNNFcLayer::reshape(
+    int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) {
+  reshapeInput(bs, ih, iw);

   CHECK_EQ(iLayerSize_, inputLayers_[0]->getSize());
-  ic_ = iLayerSize_ / (ih_ * iw_);
-  CHECK_EQ(size_t(ic_ * ih_ * iw_), iLayerSize_) << "not divisible";
-  CHECK_EQ(size_t(oc_), getSize());
-  printSizeInfo();
+  ic = iLayerSize_ / (ih * iw);
+  CHECK_EQ(size_t(ic * ih * iw), iLayerSize_) << "not divisible";
+  CHECK_EQ(size_t(oc), getSize());

-  // reset output
-  output_.setFrameHeight(oh_);
-  output_.setFrameWidth(ow_);
-  resetOutput(bs_, oc_);
+  reshapeOutput(oh, ow);
+  resizeOutput(bs, oc);

-  // reset mkldnn forward
-  resetFwd();
-  needResetBwd_ = true;
-
-  convertWeightsFromPaddle();
+  printSizeInfo();
 }

-void MKLDNNFcLayer::resetFwd() {
+void MKLDNNFcLayer::resetFwd(std::vector<mkldnn::primitive>& pipeline,
+                             MKLDNNMatrixPtr& in,
+                             MKLDNNMatrixPtr& wgt,
+                             MKLDNNMatrixPtr& bias,
+                             MKLDNNMatrixPtr& out) {
+  pipeline.clear();
   bool hasBias = biases_ && biases_->getW();
-  const MatrixPtr& wgt = weight_->getW();
-  const MatrixPtr& bias = hasBias ? biases_->getW() : nullptr;
-  const MatrixPtr& out = output_.value;
+  const MatrixPtr& wgtVal = weight_->getW();
+  const MatrixPtr& biasVal = hasBias ? biases_->getW() : nullptr;
+  const MatrixPtr& outVal = output_.value;

   if (inputIsOnlyMKLDNN()) {
-    const MatrixPtr& in = getInputValue(0);
-    inVal_ = std::dynamic_pointer_cast<MKLDNNMatrix>(in);
-    CHECK(inVal_) << "Input should be MKLDNNMatrix";
+    const MatrixPtr& inVal = getInputValue(0);
+    in = std::dynamic_pointer_cast<MKLDNNMatrix>(inVal);
+    CHECK(in) << "Input should be MKLDNNMatrix";
   } else {
     CHECK_EQ(getPrev(0)->getDeviceId(), CPU_DEVICE) << "Only support CPU yet";
-    const MatrixPtr& in = getInputValue(0, CPU_DEVICE);
-    inVal_ = MKLDNNMatrix::create(
-        in, memory::dims{bs_, ic_, ih_, iw_}, format::nchw, engine_);
+    const MatrixPtr& inVal = getInputValue(0, CPU_DEVICE);
+    in = MKLDNNMatrix::create(
+        inVal, memory::dims{bs_, ic_, ih_, iw_}, format::nchw, engine_);
   }
-  inVal_->downSpatial();
-  wgtVal_ = MKLDNNMatrix::create(
-      wgt, memory::dims{oc_, ic_, ih_, iw_}, format::oihw, engine_);
-  wgtVal_->downSpatial();
-  biasVal_ =
-      hasBias ? MKLDNNMatrix::create(bias, {oc_}, format::x, engine_) : nullptr;
-  outVal_ = MKLDNNMatrix::create(out, {bs_, oc_}, format::nc, engine_);
+  in->downSpatial();
+  wgt = MKLDNNMatrix::create(
+      wgtVal, memory::dims{oc_, ic_, ih_, iw_}, format::oihw, engine_);
+  wgt->downSpatial();
+  bias = hasBias ? MKLDNNMatrix::create(biasVal, {oc_}, format::x, engine_)
+                 : nullptr;
+  out = MKLDNNMatrix::create(outVal, {bs_, oc_}, format::nc, engine_);

   // change original output value to mkldnn output value
-  output_.value = std::dynamic_pointer_cast<Matrix>(outVal_);
+  output_.value = std::dynamic_pointer_cast<Matrix>(out);
   if (!outputIsOnlyMKLDNN()) {
-    copyOutputInfoToOtherDevice();
     // fc cpu output value do not need create convert
     // just share point
     getOutput(CPU_DEVICE).value->setData(output_.value->getData());

@@ -146,27 +133,31 @@ void MKLDNNFcLayer::resetFwd() {
   // create forward handle
   prop_kind pk = prop_kind::forward;
   fc_fwd::desc fwdDesc = hasBias ? fc_fwd::desc(pk,
-                                                inVal_->getMemoryDesc(),
-                                                wgtVal_->getMemoryDesc(),
-                                                biasVal_->getMemoryDesc(),
-                                                outVal_->getMemoryDesc())
+                                                in->getMemoryDesc(),
+                                                wgt->getMemoryDesc(),
+                                                bias->getMemoryDesc(),
+                                                out->getMemoryDesc())
                                  : fc_fwd::desc(pk,
-                                                inVal_->getMemoryDesc(),
-                                                wgtVal_->getMemoryDesc(),
-                                                outVal_->getMemoryDesc());
+                                                in->getMemoryDesc(),
+                                                wgt->getMemoryDesc(),
+                                                out->getMemoryDesc());
   fc_fwd::primitive_desc fwdPD = fc_fwd::primitive_desc(fwdDesc, engine_);
   if (hasBias) {
-    fwd_.reset(new fc_fwd(fwdPD, *inVal_, *wgtVal_, *biasVal_, *outVal_));
+    fwd_.reset(new fc_fwd(fwdPD, *in, *wgt, *bias, *out));
   } else {
-    fwd_.reset(new fc_fwd(fwdPD, *inVal_, *wgtVal_, *outVal_));
+    fwd_.reset(new fc_fwd(fwdPD, *in, *wgt, *out));
   }
   printValueFormatFlow();

-  pipelineFwd_.clear();
-  pipelineFwd_.push_back(*fwd_);
+  pipeline.push_back(*fwd_);
 }

-void MKLDNNFcLayer::resetBwd() {
+void MKLDNNFcLayer::resetBwd(std::vector<mkldnn::primitive>& pipeline,
+                             MKLDNNMatrixPtr& in,
+                             MKLDNNMatrixPtr& wgt,
+                             MKLDNNMatrixPtr& bias,
+                             MKLDNNMatrixPtr& out) {
+  pipeline.clear();
   if (!needResetBwd_) {
     return;
   }

@@ -175,8 +166,8 @@ void MKLDNNFcLayer::resetBwd() {
   /// backward weight
   CHECK(inVal_) << "Should have input value";
-  const MatrixPtr& wgt = weight_->getWGrad();
-  const MatrixPtr& bias = hasBias ? biases_->getWGrad() : nullptr;
+  const MatrixPtr& wgtGrad = weight_->getWGrad();
+  const MatrixPtr& biasGrad = hasBias ? biases_->getWGrad() : nullptr;

   // TODO(TJ): merge outgrad
   int device = outputIsOnlyMKLDNN() ? MKLDNN_DEVICE : CPU_DEVICE;

@@ -187,107 +178,66 @@ void MKLDNNFcLayer::resetBwd() {
   // for CPU device:
   // fc do not need to convert from cpu device since output is always nc format
   // only need create from cpu device
-  const MatrixPtr& out = getOutput(device).grad;
-  outGrad_ = MKLDNNMatrix::create(out, outVal_->getPrimitiveDesc());
-  wgtGrad_ = MKLDNNMatrix::create(wgt, wgtVal_->getPrimitiveDesc());
-  biasGrad_ = hasBias ? MKLDNNMatrix::create(bias, biasVal_->getPrimitiveDesc())
-                      : nullptr;
+  const MatrixPtr& outGrad = getOutput(device).grad;
+  out = MKLDNNMatrix::create(outGrad, outVal_->getPrimitiveDesc());
+  wgt = MKLDNNMatrix::create(wgtGrad, wgtVal_->getPrimitiveDesc());
+  bias = hasBias ? MKLDNNMatrix::create(biasGrad, biasVal_->getPrimitiveDesc())
+                 : nullptr;

   // create memory primitive desc
   fc_fwd::desc fwdDesc = fc_fwd::desc(prop_kind::forward,
                                       inVal_->getMemoryDesc(),
-                                      wgtGrad_->getMemoryDesc(),
-                                      outGrad_->getMemoryDesc());
+                                      wgt->getMemoryDesc(),
+                                      out->getMemoryDesc());
   fc_fwd::primitive_desc fwdPD = fc_fwd::primitive_desc(fwdDesc, engine_);
   fc_bwdWgt::desc bwdWgtDesc = hasBias
                                    ? fc_bwdWgt::desc(inVal_->getMemoryDesc(),
-                                                     wgtGrad_->getMemoryDesc(),
-                                                     biasGrad_->getMemoryDesc(),
-                                                     outGrad_->getMemoryDesc())
+                                                     wgt->getMemoryDesc(),
+                                                     bias->getMemoryDesc(),
+                                                     out->getMemoryDesc())
                                    : fc_bwdWgt::desc(inVal_->getMemoryDesc(),
-                                                     wgtGrad_->getMemoryDesc(),
-                                                     outGrad_->getMemoryDesc());
+                                                     wgt->getMemoryDesc(),
+                                                     out->getMemoryDesc());
   fc_bwdWgt::primitive_desc bwdWgtPD =
       fc_bwdWgt::primitive_desc(bwdWgtDesc, engine_, fwdPD);

   if (hasBias) {
-    bwdWgt_.reset(
-        new fc_bwdWgt(bwdWgtPD, *inVal_, *outGrad_, *wgtGrad_, *biasGrad_));
+    bwdWgt_.reset(new fc_bwdWgt(bwdWgtPD, *inVal_, *out, *wgt, *bias));
   } else {
-    bwdWgt_.reset(new fc_bwdWgt(bwdWgtPD, *inVal_, *outGrad_, *wgtGrad_));
+    bwdWgt_.reset(new fc_bwdWgt(bwdWgtPD, *inVal_, *out, *wgt));
   }
-  pipelineBwd_.clear();
-  pipelineBwd_.push_back(*bwdWgt_);
+  pipeline.push_back(*bwdWgt_);

   /// backward data
-  const MatrixPtr& in = inputLayers_[0]->getOutput().grad;
-  if (in == nullptr) {
+  const MatrixPtr& inGrad = inputLayers_[0]->getOutput().grad;
+  if (inGrad == nullptr) {
     return;
   }
   if (getInput(0, MKLDNN_DEVICE).getAllCount() > 1) {
     // TODO(TJ): use outputMaps_ ways to get the inGrad_ when merge outgrad done
   } else {
-    inGrad_ = MKLDNNMatrix::create(in, inVal_->getPrimitiveDesc());
+    in = MKLDNNMatrix::create(inGrad, inVal_->getPrimitiveDesc());
   }

-  fc_bwdData::desc bwdDataDesc = fc_bwdData::desc(inVal_->getMemoryDesc(),
-                                                  wgtGrad_->getMemoryDesc(),
-                                                  outGrad_->getMemoryDesc());
+  fc_bwdData::desc bwdDataDesc = fc_bwdData::desc(
+      inVal_->getMemoryDesc(), wgt->getMemoryDesc(), out->getMemoryDesc());
   fc_bwdData::primitive_desc bwdDataPD =
       fc_bwdData::primitive_desc(bwdDataDesc, engine_, fwdPD);

   CHECK(wgtVal_) << "Should have weight memory";
-  bwdData_.reset(new fc_bwdData(bwdDataPD, *outGrad_, *wgtVal_, *inGrad_));
+  bwdData_.reset(new fc_bwdData(bwdDataPD, *out, *wgtVal_, *in));
   printGradFormatFlow();
-  pipelineBwd_.push_back(*bwdData_);
+  pipeline.push_back(*bwdData_);
 }

 void MKLDNNFcLayer::updateInputData() {
-  if (inputLayers_[0]->getType() != "data") {
-    return;
-  }
-  real* iData = getInputValue(0, CPU_DEVICE)->getData();
-  inVal_->setData(iData);
+  inVal_->setData(getInputValue(0, CPU_DEVICE)->getData());
 }

-void MKLDNNFcLayer::forward(PassType passType) {
-  Layer::forward(passType);
-  reshape();
-
-  {
-    REGISTER_TIMER_INFO("mkldnn_FwdTimer", getName().c_str());
-    updateInputData();
-
-    // just submit forward pipeline
-    stream_->submit(pipelineFwd_);
-  }
-
-  /* activation */ {
-    REGISTER_TIMER_INFO("FwActTimer", getName().c_str());
-    forwardActivation();
-  }
-}
-
-void MKLDNNFcLayer::backward(const UpdateCallback& callback) {
-  /* Do derivation */ {
-    REGISTER_TIMER_INFO("BpActTimer", getName().c_str());
-    backwardActivation();
-  }
-
-  {
-    REGISTER_TIMER_INFO("mkldnn_bwdTimer", getName().c_str());
-    resetBwd();
-
-    // just sumbmit backward pipeline
-    stream_->submit(pipelineBwd_);
-  }
-
-  {
-    REGISTER_TIMER_INFO("WeightUpdate", getName().c_str());
-    weight_->getParameterPtr()->incUpdate(callback);
-    if (biases_ && biases_->getWGrad()) {
-      biases_->getParameterPtr()->incUpdate(callback);
-    }
-  }
+void MKLDNNFcLayer::updateWeights(const UpdateCallback& callback) {
+  weight_->getParameterPtr()->incUpdate(callback);
+  if (biases_ && biases_->getWGrad()) {
+    biases_->getParameterPtr()->incUpdate(callback);
+  }
 }
 }  // namespace paddle
paddle/gserver/layers/MKLDNNFcLayer.h — view file @ 310ef226

@@ -45,35 +45,28 @@ public:
   bool init(const LayerMap& layerMap,
             const ParameterMap& parameterMap) override;

-  void convertWeightsFromPaddle() override;
-
-  void convertWeightsToPaddle() override;
+  void reshape(
+      int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) override;

-  void forward(PassType passType) override;
+  void resetFwd(std::vector<mkldnn::primitive>& pipeline,
+                MKLDNNMatrixPtr& in,
+                MKLDNNMatrixPtr& wgt,
+                MKLDNNMatrixPtr& bias,
+                MKLDNNMatrixPtr& out) override;

-  void backward(const UpdateCallback& callback) override;
+  void resetBwd(std::vector<mkldnn::primitive>& pipeline,
+                MKLDNNMatrixPtr& in,
+                MKLDNNMatrixPtr& wgt,
+                MKLDNNMatrixPtr& bias,
+                MKLDNNMatrixPtr& out) override;

   void updateInputData() override;

-protected:
-  /**
-   * reshape the input image sizes
-   * and reset output buffer size
-   * and reset mkldnn forward
-   */
-  void reshape();
-
-  /**
-   * reset the forward primitve and memory
-   * only would be called when input size changes
-   */
-  void resetFwd();
-
-  /**
-   * reset the backward primitve and memory for mkldnn fc
-   * only would be called when needed
-   */
-  void resetBwd();
+  void updateWeights(const UpdateCallback& callback) override;
+
+  void convertWeightsFromPaddle() override;
+
+  void convertWeightsToPaddle() override;
 };

 }  // namespace paddle
paddle/gserver/layers/MKLDNNLayer.h — view file @ 310ef226

@@ -19,6 +19,7 @@ limitations under the License. */
 #include "MKLDNNBase.h"
 #include "mkldnn.hpp"
 #include "paddle/math/MKLDNNMatrix.h"
+#include "paddle/utils/Stat.h"

 DECLARE_bool(use_mkldnn);

@@ -33,6 +34,8 @@ typedef std::shared_ptr<MKLDNNLayer> MKLDNNLayerPtr;
 */
 class MKLDNNLayer : public Layer {
 protected:
+  // input value element count
+  size_t inputElemenCnt_;
   // batch size
   int bs_;
   // input image channel, height and width

@@ -52,7 +55,7 @@ protected:
   std::vector<mkldnn::primitive> pipelineFwd_;
   std::vector<mkldnn::primitive> pipelineBwd_;

-  // MKLDNNMatrixPtr
+  // MKLDNNMatrixPtr with internal format
   MKLDNNMatrixPtr inVal_;
   MKLDNNMatrixPtr inGrad_;
   MKLDNNMatrixPtr outVal_;

@@ -65,6 +68,7 @@ protected:
 public:
   explicit MKLDNNLayer(const LayerConfig& config)
       : Layer(config),
+        inputElemenCnt_(0),
        bs_(0),
        ic_(0),
        ih_(0),

@@ -95,12 +99,104 @@ public:
     if (!Layer::init(layerMap, parameterMap)) {
       return false;
     }
+    checkCPUOutputsNumber();

     stream_.reset(new MKLDNNStream());
     engine_ = CPUEngine::Instance().getEngine();
     return true;
   }

+  void forward(PassType passType) override {
+    passType_ = passType;
+
+    {
+      REGISTER_TIMER_INFO("mkldnn_FwdTimer", getName().c_str());
+      CHECK(!inputLayers_.empty());
+      copySeqInfoToOutputs();
+      size_t elemenCnt = inputLayers_[0]->getOutput().value->getElementCnt();
+      if (inputElemenCnt_ != elemenCnt) {
+        // reset when input total sizes changed, not only the batchsize
+        inputElemenCnt_ = elemenCnt;
+        reshape(bs_, ic_, ih_, iw_, oc_, oh_, ow_);
+        resetFwd(pipelineFwd_, inVal_, wgtVal_, biasVal_, outVal_);
+        convertWeightsFromPaddle();
+        needResetBwd_ = true;
+      }
+
+      if (inputLayers_[0]->getType() == "data") {
+        updateInputData();
+      }
+
+      stream_->submit(pipelineFwd_);
+    }
+
+    /* activation */ {
+      REGISTER_TIMER_INFO("FwActTimer", getName().c_str());
+      forwardActivation();
+    }
+  }
+
+  void backward(const UpdateCallback& callback) override {
+    /* Do derivation */ {
+      REGISTER_TIMER_INFO("BpActTimer", getName().c_str());
+      backwardActivation();
+    }
+
+    {
+      REGISTER_TIMER_INFO("mkldnn_bwdTimer", getName().c_str());
+      if (needResetBwd_) {
+        resetBwd(pipelineBwd_, inGrad_, wgtGrad_, biasGrad_, outGrad_);
+        needResetBwd_ = false;
+      }
+
+      stream_->submit(pipelineBwd_);
+    }
+
+    {
+      REGISTER_TIMER_INFO("WeightUpdate", getName().c_str());
+      updateWeights(callback);
+    }
+  }
+
+  /**
+   * reshape the input image sizes
+   * and reset output image and buffer size
+   * output channel can not be changed
+   */
+  virtual void reshape(
+      int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) = 0;
+
+  /**
+   * reset the mkldnn forward primitve and memory
+   * only would be called when input size changes
+   */
+  virtual void resetFwd(std::vector<mkldnn::primitive>& pipeline,
+                        MKLDNNMatrixPtr& in,
+                        MKLDNNMatrixPtr& wgt,
+                        MKLDNNMatrixPtr& bias,
+                        MKLDNNMatrixPtr& out) = 0;
+
+  /**
+   * reset the mkldnn backward primitve and memory for mkldnn fc
+   * only would be called when needed
+   */
+  virtual void resetBwd(std::vector<mkldnn::primitive>& pipeline,
+                        MKLDNNMatrixPtr& in,
+                        MKLDNNMatrixPtr& wgt,
+                        MKLDNNMatrixPtr& bias,
+                        MKLDNNMatrixPtr& out) = 0;
+
+  /**
+   * Update input value data when input layer is "data" type.
+   * Since the input value data address might be changed.
+   */
+  virtual void updateInputData() {}
+
+  /**
+   * Update weights and biases if necessary.
+   */
+  virtual void updateWeights(const UpdateCallback& callback) {}
+
   /**
    * convert weight from paddle format to mkldnn format
    * weight_ will be override

@@ -114,10 +210,38 @@ public:
   virtual void convertWeightsToPaddle() {}

   /**
-   * Update input value data when input layer is "data" type.
-   * Since the input value data address might be changed.
+   * add this interface as public for unit test
    */
-  virtual void updateInputData() {}
+  void addOutputArgument(int deviceId) { Layer::addOutputArgument(deviceId); }
+
+protected:
+  /**
+   * reshape the input image sizes and input batchsize
+   */
+  virtual void reshapeInput(int& batchsize, int& height, int& width) {
+    const Argument& input = inputLayers_[0]->getOutput();
+    batchsize = input.getBatchSize();
+    int h = input.getFrameHeight();
+    int w = input.getFrameWidth();
+    if (h != 0) {
+      height = h;
+    }
+    if (w != 0) {
+      width = w;
+    }
+  }
+
+  /**
+   * reshape output image sizes
+   */
+  virtual void reshapeOutput(size_t height, size_t width) {
+    output_.setFrameHeight(height);
+    output_.setFrameWidth(width);
+    for (size_t i = 0; i < outputOtherDevice_.size(); i++) {
+      outputOtherDevice_[i].setFrameHeight(height);
+      outputOtherDevice_[i].setFrameWidth(width);
+    }
+  }

   /**
    * print info about sizes

@@ -133,8 +257,8 @@ public:
   */
   virtual void printValueFormatFlow() {
     if (inVal_ && outVal_) {
-      VLOG(MKLDNN_FMTS) << "value format flow --- " << inVal_->getFormat()
-                        << " >>> " << outVal_->getFormat();
+      VLOG(MKLDNN_FMTS) << inVal_->getFormat() << " >>> "
+                        << outVal_->getFormat();
     }
   }

@@ -143,36 +267,12 @@ public:
   */
   virtual void printGradFormatFlow() {
     if (inGrad_ && outGrad_) {
-      VLOG(MKLDNN_FMTS) << "grad format flow --- " << inGrad_->getFormat()
-                        << " <<< " << outGrad_->getFormat();
+      VLOG(MKLDNN_FMTS) << inGrad_->getFormat() << " <<< "
+                        << outGrad_->getFormat();
     }
   }

 protected:
-  /**
-   * copy image size and sequence info to other device
-   * @note: can not directly use Layer::copyOutputToOtherDevice since here only
-   * copy base info and do not copy data value
-   */
-  void copyOutputInfoToOtherDevice() {
-    int cnt = 0;
-    for (size_t i = 0; i < outputOtherDevice_.size(); i++) {
-      outputOtherDevice_[i].setFrameHeight(output_.getFrameHeight());
-      outputOtherDevice_[i].setFrameWidth(output_.getFrameWidth());
-      outputOtherDevice_[i].sequenceStartPositions =
-          output_.sequenceStartPositions;
-      outputOtherDevice_[i].subSequenceStartPositions =
-          output_.subSequenceStartPositions;
-      outputOtherDevice_[i].cpuSequenceDims = output_.cpuSequenceDims;
-      if (outputOtherDevice_[i].deviceId == CPU_DEVICE) {
-        ++cnt;
-      }
-    }
-    if (cnt > 1) {
-      LOG(WARNING) << "should not have more than one CPU devie";
-    }
-  }
-
   /**
    * If input only has MKLDNN device.
    * Otherwise, only support the previous layer using CPU device.

@@ -205,6 +305,7 @@ protected:
   */
   void setDevice(int id) { deviceId_ = id; }

+private:
   /**
    * Set deviceId of the params used in this layer.
    */

@@ -228,6 +329,42 @@ protected:
       parameter->setDevice(id);
     }
   }
+
+  /**
+   * Check the cpu device number of outputOtherDevice_.
+   * should have only one at most.
+   */
+  void checkCPUOutputsNumber(int max = 1) {
+    int cnt = 0;
+    for (size_t i = 0; i < outputOtherDevice_.size(); i++) {
+      if (outputOtherDevice_[i].deviceId == CPU_DEVICE) {
+        ++cnt;
+      }
+    }
+    CHECK_LE(cnt, max) << "too much CPU devies";
+  }
+
+  /**
+   * copy SeqInfo from input layer to this output and other output devices.
+   * @note: do not use getInput(0) since it used this deviceId_,
+   *        use "inputLayers_[0]->getOutput()" instead.
+   */
+  void copySeqInfoToOutputs() {
+    if (inputLayers_.empty() || !needSequenceInfo_) {
+      return;
+    }
+    const Argument& input = inputLayers_[0]->getOutput();
+    output_.sequenceStartPositions = input.sequenceStartPositions;
+    output_.subSequenceStartPositions = input.subSequenceStartPositions;
+    output_.cpuSequenceDims = input.cpuSequenceDims;
+    for (size_t i = 0; i < outputOtherDevice_.size(); i++) {
+      outputOtherDevice_[i].sequenceStartPositions =
+          output_.sequenceStartPositions;
+      outputOtherDevice_[i].subSequenceStartPositions =
+          output_.subSequenceStartPositions;
+      outputOtherDevice_[i].cpuSequenceDims = output_.cpuSequenceDims;
+    }
+  }
 };

 }  // namespace paddle
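Taken together, this header change turns MKLDNNLayer into a template-method base class: it owns the forward()/backward() skeletons, the timers, and the cached primitive pipelines, and rebuilds primitives only when the input's total element count changes, while subclasses such as MKLDNNFcLayer supply reshape/resetFwd/resetBwd. A minimal stand-alone sketch of that control flow, with toy types standing in for the mkldnn primitives (not Paddle code):

#include <cstddef>
#include <iostream>
#include <vector>

class BaseLayer {
public:
  virtual ~BaseLayer() = default;

  // The skeleton: rebuild shapes and primitives only when the input's total
  // element count changes; otherwise just resubmit the cached pipeline.
  void forward(std::size_t elemenCnt) {
    if (elemenCnt != inputElemenCnt_) {
      inputElemenCnt_ = elemenCnt;
      reshape();
      resetFwd(pipelineFwd_);
    }
    submit(pipelineFwd_);
  }

protected:
  virtual void reshape() = 0;
  virtual void resetFwd(std::vector<int>& pipeline) = 0;

private:
  void submit(const std::vector<int>& p) {
    std::cout << "submit " << p.size() << " primitive(s)\n";
  }
  std::size_t inputElemenCnt_ = 0;
  std::vector<int> pipelineFwd_;  // stand-in for std::vector<mkldnn::primitive>
};

class FcLayer : public BaseLayer {
protected:
  void reshape() override { std::cout << "recompute bs/ic/ih/iw\n"; }
  void resetFwd(std::vector<int>& pipeline) override {
    pipeline.assign(1, 0);  // rebuild the (fake) forward primitive
  }
};

int main() {
  FcLayer fc;
  fc.forward(64);   // first call: reshape + rebuild + submit
  fc.forward(64);   // same size: submit only
  fc.forward(128);  // size changed: reshape + rebuild again
}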
paddle/gserver/tests/MKLDNNTester.cpp — view file @ 310ef226

@@ -63,8 +63,12 @@ void MKLDNNTester::reset(const TestConfig& dnn,
     initTestLayer(
         configs_[i], &(layerMaps_[i]), &(parameters_[i]), &(testLayers_[i]));
   }
-  dnnLayer_ = testLayers_[DNN];
   refLayer_ = testLayers_[REF];
+  dnnLayer_ = std::dynamic_pointer_cast<MKLDNNLayer>(testLayers_[DNN]);
+  CHECK(dnnLayer_);
+  // for comparison with Paddle reference results,
+  // need manually add cpu device output for test
+  dnnLayer_->addOutputArgument(CPU_DEVICE);

   EXPECT_EQ(dataLayers_[DNN].size(), dataLayers_[REF].size());
   EXPECT_EQ(parameters_[DNN].size(), parameters_[REF].size());

@@ -109,20 +113,22 @@ void MKLDNNTester::randomBotDatas() {
 void MKLDNNTester::randomTopDiffs() {
   refLayer_->getOutputGrad()->randomizeUniform();
-  dnnLayer_->getOutputGrad()->copyFrom(*(refLayer_->getOutputGrad()));
-  VLOG(lvl_) << "Random dom Backward Input, TopDiff: ";
+  dnnLayer_->getOutput(CPU_DEVICE)
+      .grad->copyFrom(*(refLayer_->getOutputGrad()));
+  VLOG(lvl_) << "Random Backward Input, TopDiff: ";
   printMatrix(refLayer_->getOutputGrad());
 }

 void MKLDNNTester::checkForward() {
-  printTopDatas();
-  double delta = compareMatrix(testLayers_[DNN]->getOutputValue(),
-                               testLayers_[REF]->getOutputValue());
   VLOG(MKLDNN_ALL) << "Check Forward";
+  printTopDatas();
+  double delta =
+      compareMatrix(dnnLayer_->getOutput(-1).value, refLayer_->getOutputValue());
   EXPECT_LE(fabs(delta), eps_);
 }

 void MKLDNNTester::checkBackwardData() {
+  VLOG(MKLDNN_ALL) << "Check Backward Data";
   // TODO(TJ): uncomment me when batch norm ready
   // const bool isBN = dnnLayer_->getType() == "mkldnn_batch_norm";
   for (size_t i = 0; i < dataLayers_[DNN].size(); ++i) {

@@ -144,14 +150,12 @@ void MKLDNNTester::checkBackwardData() {
 }

 void MKLDNNTester::checkBackwardWgts() {
+  VLOG(MKLDNN_ALL) << "Check Backward Weight";
   CHECK_EQ(parameters_[DNN].size(), parameters_[REF].size());
   vector<VectorPtr> dnnWgts;  // used to temply save mkldnn weights
   saveWgt(parameters_[DNN], dnnWgts);

-  const MKLDNNLayerPtr dnnlayer =
-      std::dynamic_pointer_cast<MKLDNNLayer>(dnnLayer_);
-  CHECK(dnnlayer);
-  dnnlayer->convertWeightsToPaddle();
+  dnnLayer_->convertWeightsToPaddle();
   for (size_t i = 0; i < parameters_[DNN].size(); ++i) {
     const VectorPtr& dnn = parameters_[DNN][i]->getBuf(PARAMETER_VALUE);
     const VectorPtr& ref = parameters_[REF][i]->getBuf(PARAMETER_VALUE);

@@ -189,38 +193,38 @@ void MKLDNNTester::restoreWgt(const vector<VectorPtr>& from,
 }

 // clear parameters grad
-void MKLDNNTester::clearWgtDiffs() {
+void MKLDNNTester::clearWgtDiffs(size_t id) {
+  CHECK_LE(id, parameters_.size());
   for (size_t n = 0; n < parameters_.size(); ++n) {
-    for (size_t i = 0; i < parameters_[n].size(); ++i) {
-      const VectorPtr& grad = parameters_[n][i]->getBuf(PARAMETER_GRADIENT);
-      if (grad) {
-        grad->zeroMem();
+    if (id == n || id == parameters_.size()) {
+      for (size_t i = 0; i < parameters_[n].size(); ++i) {
+        const VectorPtr& grad = parameters_[n][i]->getBuf(PARAMETER_GRADIENT);
+        if (grad) {
+          grad->zeroMem();
+        }
       }
     }
   }
 }

-void MKLDNNTester::clearBotDiffs() {
-  // dnn and ref
+void MKLDNNTester::clearBotDiffs(size_t id) {
+  CHECK_LE(id, dataLayers_.size());
   for (size_t n = 0; n < dataLayers_.size(); ++n) {
-    // all inputs layers
-    for (size_t i = 0; i < dataLayers_[n].size(); ++i) {
-      dataLayers_[n][i]->getOutputGrad()->zeroMem();
+    if (id == n || id == dataLayers_.size()) {
+      // clear inputs layers of this specific layer
+      for (size_t i = 0; i < dataLayers_[n].size(); ++i) {
+        dataLayers_[n][i]->getOutputGrad()->zeroMem();
+      }
     }
   }
 }

-void MKLDNNTester::clearBotDiffs(int n) {
-  CHECK_LT(n, NUM);
-  // all inputs layers
-  for (size_t i = 0; i < dataLayers_[n].size(); ++i) {
-    dataLayers_[n][i]->getOutputGrad()->zeroMem();
-  }
-}
-
-void MKLDNNTester::clearTopDatas() {
+void MKLDNNTester::clearTopDatas(size_t id) {
+  CHECK_LE(id, testLayers_.size());
   for (size_t i = 0; i < testLayers_.size(); ++i) {
-    testLayers_[i]->getOutputValue()->zeroMem();
+    if (id == i || id == testLayers_.size()) {
+      testLayers_[i]->getOutputValue()->zeroMem();
+    }
   }
 }

@@ -300,16 +304,24 @@ void MKLDNNTester::runOnce() {
   checkForward();

   // test backward
+  // simple updater
+  UpdateCallback updateCallback = [](Parameter* para) {
+    auto& grad = para->getBuf(PARAMETER_GRADIENT);
+    auto& value = para->getBuf(PARAMETER_VALUE);
+    real lr = 1e-3;
+    value->add(*grad, lr);
+  };
+
   randomTopDiffs();
-  dnnLayer_->backward(nullptr);
-  refLayer_->backward(nullptr);
+  dnnLayer_->backward(updateCallback);
+  refLayer_->backward(updateCallback);
   checkBackwardData();
   checkBackwardWgts();

   // clear buffers
   // ref code will addto the diff, dnn code will writeto it
-  // and clearTopDatas() and clearWgtDiffs() should be coverd by test layers
+  // and clearTopDatas(REF) should be coverd by ref layers
   clearBotDiffs(REF);
+  clearWgtDiffs(REF);
 }

 void MKLDNNTester::run(const TestConfig& dnn,
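The new "simple updater" exists so that backward() actually applies a weight update during the test, instead of receiving nullptr and leaving parameters untouched between iterations. A stand-alone sketch of the callback shape (Param and backward here are toy stand-ins, not the Paddle types):

#include <cstddef>
#include <functional>
#include <iostream>
#include <vector>

// Toy stand-ins for Parameter and Layer::backward.
struct Param {
  std::vector<float> value{1.0f, 2.0f};
  std::vector<float> grad{0.5f, -0.5f};
};

using UpdateCallback = std::function<void(Param*)>;

void backward(Param& p, const UpdateCallback& cb) {
  // ... gradient computation would happen here ...
  if (cb) cb(&p);  // report the parameter so the updater can apply a step
}

int main() {
  // Mirrors the test's updater: value += lr * grad, with lr = 1e-3.
  UpdateCallback updateCallback = [](Param* para) {
    const float lr = 1e-3f;
    for (std::size_t i = 0; i < para->value.size(); ++i) {
      para->value[i] += lr * para->grad[i];
    }
  };
  Param p;
  backward(p, updateCallback);
  std::cout << p.value[0] << " " << p.value[1] << "\n";  // 1.0005 1.9995
}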
paddle/gserver/tests/MKLDNNTester.h — view file @ 310ef226

@@ -18,6 +18,7 @@ limitations under the License. */
 #include <vector>
 #include "LayerGradUtil.h"
 #include "paddle/gserver/layers/MKLDNNBase.h"
+#include "paddle/gserver/layers/MKLDNNLayer.h"

 namespace paddle {

@@ -40,7 +41,8 @@ protected:
   vector<LayerMap> layerMaps_;
   vector<vector<ParameterPtr>> parameters_;
   vector<LayerPtr> testLayers_;
-  LayerPtr dnnLayer_, refLayer_;
+  LayerPtr refLayer_;
+  MKLDNNLayerPtr dnnLayer_;

   /// run some iterations, all the result should pass
   size_t iter_;

@@ -88,10 +90,10 @@ private:
   void checkBackwardData();
   void checkBackwardWgts();

-  void clearWgtDiffs();
-  void clearBotDiffs();
-  void clearBotDiffs(int n);  // clear specific layer
-  void clearTopDatas();
+  // clear specific layer, clear all when id equals NUM
+  void clearWgtDiffs(size_t id = NUM);
+  void clearBotDiffs(size_t id = NUM);
+  void clearTopDatas(size_t id = NUM);

   void printTopDatas();
   void printMatrix(const MatrixPtr& m);
paddle/operators/math/im2col_test.cc — view file @ 310ef226

@@ -119,4 +119,4 @@ TEST(math, im2col) {
 #ifndef PADDLE_ONLY_CPU
   testIm2col<paddle::platform::GPUPlace>();
 #endif
-}
\ No newline at end of file
+}
paddle/pybind/pybind.cc — view file @ 310ef226

@@ -17,6 +17,7 @@ limitations under the License. */
 #include <vector>

 #include "paddle/framework/backward.h"
+#include "paddle/framework/lod_tensor.h"
 #include "paddle/framework/op_registry.h"
 #include "paddle/operators/net_op.h"
 #include "paddle/operators/recurrent_op.h"

@@ -58,6 +59,8 @@ namespace paddle {
 namespace framework {

 using Tensor = framework::Tensor;
+using LoDTensor = framework::LoDTensor;
+using LoD = framework::LoD;

 static size_t UniqueIntegerGenerator() {
   static std::atomic<size_t> generator;

@@ -117,6 +120,60 @@ PYBIND11_PLUGIN(core) {
         return self.data<float>()[offset];
       });

+  py::class_<LoDTensor>(m,
+                        "LoDTensor",
+                        R"DOC(LoD(Leval of Ddetails) Tensor.
+
+The tensor and LoD info should be created before creating the LoDTensor, then
+call the set_tensor and set_lod functions to set them.
+)DOC")
+      .def("__init__",
+           [](LoDTensor& instance,
+              const std::vector<std::vector<size_t>>& lod,
+              Tensor* t) {
+#ifdef PADDLE_ONLY_CPU
+             new (&instance) LoDTensor(lod, t);
+#else
+             paddle::framework::LoD new_lod;
+             new_lod.reserve(lod.size());
+             std::copy(lod.begin(), lod.end(), std::back_inserter(new_lod));
+             new (&instance) LoDTensor(new_lod, t);
+#endif
+           })
+      .def("set_tensor",
+           [](LoDTensor& self, Tensor* tensor) { self.set_tensor(tensor); })
+      .def("set_lod",
+           [](LoDTensor& self, const std::vector<std::vector<size_t>>& lod) {
+#ifdef PADDLE_ONLY_CPU
+             self.set_lod(lod);
+#else
+             paddle::framework::LoD new_lod;
+             new_lod.reserve(lod.size());
+             std::copy(lod.begin(), lod.end(), std::back_inserter(new_lod));
+             self.set_lod(new_lod);
+#endif
+           })
+      .def("tensor",
+           [](LoDTensor& self) -> Tensor& { return self.tensor(); },
+           py::return_value_policy::reference)
+      .def("lod", [](LoDTensor& self) -> std::vector<std::vector<size_t>> {
+#ifdef PADDLE_ONLY_CPU
+        return self.lod();
+#else
+        auto lod = self.lod();
+        std::vector<std::vector<size_t>> new_lod;
+        new_lod.reserve(lod.size());
+        std::transform(lod.begin(),
+                       lod.end(),
+                       std::back_inserter(new_lod),
+                       [](paddle::framework::Vector<size_t> item) ->
+                           std::vector<size_t> {
+                         std::vector<size_t> v;
+                         v.reserve(item.size());
+                         std::copy(item.begin(), item.end(),
+                                   std::back_inserter(v));
+                         return v;
+                       });
+        return new_lod;
+#endif
+      });
+
   py::class_<Variable>(m,
                        "Variable",
                        R"DOC(Variable Class.

 All parameter, weight, gradient are variables in Paddle.

@@ -128,6 +185,11 @@ All parameter, weight, gradient are variables in Paddle.
       .def("get_tensor",
            [](Variable& self) -> Tensor* { return self.GetMutable<Tensor>(); },
            py::return_value_policy::reference)
+      .def("get_lod_tensor",
+           [](Variable& self) -> LoDTensor* {
+             return self.GetMutable<LoDTensor>();
+           },
+           py::return_value_policy::reference)
       .def("get_net",
            [](Variable& self) -> operators::NetOp* {
              return self.GetMutable<operators::NetOp>();
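The binding above uses the classic pybind11 placement-new __init__ idiom: the lambda receives a reference to raw storage pybind11 has already allocated and constructs the object in place, which gives the binding a chance to convert the Python list-of-lists into the internal LoD type first. A reduced stand-alone sketch of the idiom (Demo and the module name are hypothetical; newer pybind11 prefers py::init, but this form matches the code above):

#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <new>
#include <vector>

namespace py = pybind11;

// Hypothetical reduced type; the real binding wraps framework::LoDTensor.
struct Demo {
  explicit Demo(const std::vector<std::vector<size_t>>& lod) : lod_(lod) {}
  std::vector<std::vector<size_t>> lod_;
};

PYBIND11_MODULE(demo, m) {
  py::class_<Demo>(m, "Demo")
      .def("__init__",
           [](Demo& instance, const std::vector<std::vector<size_t>>& lod) {
             // A conversion step (e.g. to a pinned-memory LoD) would go here.
             new (&instance) Demo(lod);
           })
      .def("lod", [](const Demo& self) { return self.lod_; });
}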
python/paddle/v2/framework/tests/test_tensor.py — view file @ 310ef226

@@ -3,7 +3,7 @@ import unittest
 import numpy

-class TestScope(unittest.TestCase):
+class TestTensor(unittest.TestCase):
     def test_int_tensor(self):
         scope = core.Scope()
         var = scope.new_var("test_tensor")

@@ -20,8 +20,8 @@ class TestScope(unittest.TestCase):
         tensor.set(tensor_array, place)

         tensor_array_2 = numpy.array(tensor)
-        self.assertEqual(1.0, tensor_array_2[3, 9])
-        self.assertEqual(2.0, tensor_array_2[19, 11])
+        self.assertEqual(1, tensor_array_2[3, 9])
+        self.assertEqual(2, tensor_array_2[19, 11])

     def test_float_tensor(self):
         scope = core.Scope()

@@ -43,6 +43,84 @@ class TestScope(unittest.TestCase):
         self.assertAlmostEqual(1.0, tensor_array_2[3, 9])
         self.assertAlmostEqual(2.0, tensor_array_2[19, 11])

+    def test_int_lod_tensor(self):
+        places = [core.CPUPlace(), core.GPUPlace(0)]
+        for place in places:
+            scope = core.Scope()
+            var = scope.new_var("test_tensor")
+            var_lod = scope.new_var("test_lod_tensor")
+
+            tensor = var.get_tensor()
+            lod_tensor = var_lod.get_lod_tensor()
+
+            tensor.set_dims([4, 4, 6])
+            tensor.alloc_int(place)
+            array = numpy.array(tensor)
+            array[0, 0, 0] = 3
+            array[3, 3, 5] = 10
+            tensor.set(array, place)
+
+            lod_tensor.set_tensor(tensor)
+            lod_tensor.set_lod([[0, 2, 4]])
+
+            lod_v = numpy.array(lod_tensor.tensor())
+            self.assertTrue(numpy.alltrue(array == lod_v))
+
+            lod = lod_tensor.lod()
+            self.assertEqual(0, lod[0][0])
+            self.assertEqual(2, lod[0][1])
+            self.assertEqual(4, lod[0][2])
+
+    def test_float_lod_tensor(self):
+        places = [core.CPUPlace(), core.GPUPlace(0)]
+        for place in places:
+            scope = core.Scope()
+            var = scope.new_var("test_tensor")
+            var_lod = scope.new_var("test_lod_tensor")
+
+            tensor = var.get_tensor()
+            lod_tensor = var_lod.get_lod_tensor()
+
+            tensor.set_dims([5, 2, 3, 4])
+            tensor.alloc_float(place)
+
+            tensor_array = numpy.array(tensor)
+            self.assertEqual((5, 2, 3, 4), tensor_array.shape)
+            tensor_array[0, 0, 0, 0] = 1.0
+            tensor_array[0, 0, 0, 1] = 2.0
+            tensor.set(tensor_array, place)
+
+            lod_tensor.set_tensor(tensor)
+
+            lod_v = numpy.array(lod_tensor.tensor())
+            self.assertAlmostEqual(1.0, lod_v[0, 0, 0, 0])
+            self.assertAlmostEqual(2.0, lod_v[0, 0, 0, 1])
+            self.assertEqual(len(lod_tensor.lod()), 0)
+
+            lod_py = [[0, 2, 5], [0, 2, 4, 5]]
+            lod_tensor.set_lod(lod_py)
+            lod = lod_tensor.lod()
+            self.assertListEqual(lod_py, lod)
+
+    def test_lod_tensor_init(self):
+        scope = core.Scope()
+        var = scope.new_var("test_tensor")
+        place = core.CPUPlace()
+        tensor = var.get_tensor()
+        tensor.set_dims([5, 2, 3, 4])
+        tensor.alloc_float(place)
+        tensor_array = numpy.array(tensor)
+        tensor_array[0, 0, 0, 0] = 1.0
+        tensor_array[0, 0, 0, 1] = 2.0
+        tensor.set(tensor_array, place)
+        lod_py = [[0, 2, 5], [0, 2, 4, 5]]
+
+        lod_tensor = core.LoDTensor(lod_py, tensor)
+        lod_v = numpy.array(lod_tensor.tensor())
+        self.assertAlmostEqual(1.0, lod_v[0, 0, 0, 0])
+        self.assertAlmostEqual(2.0, lod_v[0, 0, 0, 1])
+        self.assertListEqual(lod_py, lod_tensor.lod())
+
 if __name__ == '__main__':
     unittest.main()