PaddlePaddle / Paddle-Lite
Commit
aef98218, authored May 17, 2018 by 朔-望
update clang-format & update code stylee
Parent: 333ff13f
Showing 70 changed files with 18,839 additions and 16,728 deletions.
.clang-format                                +7     −0
.pre-commit-config.yaml                      +12    −1
src/common/type_define.h                     +27    −26
src/common/types.h                           +37    −37
src/common/variant.h                         +63    −63
src/framework/attribute.cpp                  +1     −1
src/framework/attribute.h                    +96    −93
src/framework/block_desc.cpp                 +24    −24
src/framework/block_desc.h                   +33    −29
src/framework/data_layout.h                  +40    −39
src/framework/data_transform.cpp             +63    −58
src/framework/data_transform.h               +7     −7
src/framework/data_type.h                    +18    −18
src/framework/ddim.cc                        +316   −307
src/framework/ddim.h                         +135   −130
src/framework/dim.h                          +368   −346
src/framework/executor.cpp                   +77    −68
src/framework/executor.h                     +15    −15
src/framework/framework.pb.cpp               +8484  −7699
src/framework/framework.pb.h                 +5426  −4802
src/framework/lod_tensor.cc                  +298   −274
src/framework/lod_tensor.h                   +186   −174
src/framework/op_desc.cpp                    +54    −51
src/framework/op_desc.h                      +20    −18
src/framework/op_info.h                      +70    −66
src/framework/op_kernel_type.h               +38    −30
src/framework/op_proto_maker.h               +4     −4
src/framework/operator.cpp                   +17    −17
src/framework/operator.h                     +44    −40
src/framework/paddle_mobile_object.h         +9     −9
src/framework/program.cpp                    +1     −1
src/framework/program.h                      +10    −10
src/framework/program_desc.cpp               +11    −11
src/framework/program_desc.h                 +13    −11
src/framework/scope.cc                       +100   −97
src/framework/scope.h                        +38    −37
src/framework/selected_rows.h                +42    −40
src/framework/tensor.h                       +312   −283
src/framework/tensor_util.cc                 +185   −179
src/framework/tensor_util.h                  +31    −31
src/framework/var_desc.cpp                   +3     −3
src/framework/var_desc.h                     +53    −52
src/framework/var_type.h                     +12    −11
src/framework/variable.h                     +70    −68
src/io.cpp                                   +350   −304
src/io.h                                     +8     −7
src/memory/t_malloc.cc                       +22    −22
src/memory/t_malloc.h                        +32    −32
src/operators/conv_op.cpp                    +33    −33
src/operators/conv_op.h                      +28    −27
src/operators/kernel/arm/conv_kernel.cpp     +146   −131
src/operators/kernel/conv_kernel.h           +9     −8
src/operators/kernel/fpga/conv_kernel.cpp    +7     −6
src/operators/math/im2col.cc                 +319   −245
src/operators/math/im2col.h                  +88    −75
src/operators/math/math_function.cc          +121   −102
src/operators/math/math_function.h           +22    −20
src/operators/math/vol2col.cc                +208   −175
src/operators/math/vol2col.h                 +71    −59
src/operators/op_param.cpp                   +20    −19
src/operators/op_param.h                     +88    −82
src/platform/data_type.h                     +97    −96
src/platform/macros.h                        +5     −5
tools/android-cmake/android.toolchain.cmake  +0     −0
tools/ios-cmake/ios.toolchain.cmake          +0     −0
tools/pre-commit.hooks/.clang_format.hook    +0     −0
tools/pre-commit.hooks/.copyright.hook       +124   −0
tools/pre-commit.hooks/clang-format.bash     +15    −0
tools/pre-commit.hooks/copyright.py          +143   −0
tools/pre-commit.hooks/cpplint.bash          +13    −0
.clang-format (new file, mode 0 → 100644)

```yaml
---
Language: Cpp
BasedOnStyle: LLVM
Standard: Cpp11
IndentWidth: 4
NamespaceIndentation: All
...
```
.pre-commit-config.yaml

Two hunks (`@@ -6,6 +6,7 @@ repos:` and `@@ -18,11 +19,21 @@ repos:`): the clang-format hook's `entry` now points at the relocated hook script under tools/pre-commit.hooks/, and a commented-out `copyright_checker` hook is appended. The affected region of the new file:

```yaml
    -   id: remove-tabs
        files: (src).*\.(md|py|mm|swift|java|c|cc|cxx|cpp|cu|h|hpp|hxx)$
-   repo: https://github.com/pre-commit/pre-commit-hooks
    sha: 5bf6c09bfa1297d3692cadd621ef95f1284e33c0
    hooks:
    -   id: trailing-whitespace
        files: (src).*\.(md|py|mm|swift|java|c|cc|cxx|cpp|cu|h|hpp|hxx)$
-   repo: local
    hooks:
    -   id: clang-format-with-version-check
        name: clang-format
        description: Format files with ClangFormat.
        entry: bash ./tools/pre-commit.hooks/.clang_format.hook -i   # was: bash .clang_format.hook -i
        language: system
        files: (src).*\.(c|cc|cxx|cpp|h|hpp|hxx)$
#- repo: local
#  hooks:
#  - id: copyright_checker
#    name: copyright_checker
#    entry: python ./tools/pre-commit.hooks/.copyright.hook
#    language: system
#    files: (src).*\.(c|cc|cxx|cpp|cu|h|hpp|hxx|proto|py)$
#    exclude: (?!.*third_party)^.*$ | (?!.*book)^.*$
```
src/common/type_define.h (hunk `@@ -23,30 +23,31 @@ SOFTWARE.`)

The change is clang-format reflow only; the file after the change:

```cpp
namespace paddle_mobile {
    namespace framework {
        template <typename Dtype> class OperatorBase;
        class OpDesc;
        class BlockDesc;
        class InferShapeContext;
    }

    using VariableNameMap = std::map<std::string, std::vector<std::string>>;

    template <typename Dtype>
    using OpCreator = std::function<framework::OperatorBase<Dtype> *(
        const std::string & /*type*/, const VariableNameMap & /*inputs*/,
        const VariableNameMap & /*outputs*/,
        const framework::AttributeMap & /*attrs*/)>;

    using GradOpMakerFN =
        std::function<std::vector<std::unique_ptr<framework::OpDesc>>(
            const framework::OpDesc &,
            const std::unordered_set<std::string> & /*no_grad_set*/,
            std::unordered_map<std::string, std::string> * /*grad_to_var*/,
            const std::vector<framework::BlockDesc *> &grad_block)>;

    using InferVarTypeFN =
        std::function<void(const framework::OpDesc & /*op_desc*/,
                           framework::BlockDesc * /*block*/)>;

    using InferShapeFN = std::function<void(framework::InferShapeContext *)>;
};
src/common/types.h (hunk `@@ -19,45 +19,45 @@ SOFTWARE.`)

The change is clang-format reflow only; the file after the change:

```cpp
#pragma once;

namespace paddle_mobile {
    enum class Precision : int { FP32 = 0 };

    //! device type
    enum DeviceTypeEnum { kINVALID = -1, kCPU = 0, kFPGA = 1, kGPU_MALI = 2 };

    template <DeviceTypeEnum T> struct DeviceType {};

    typedef DeviceType<kCPU> CPU;
    typedef DeviceType<kFPGA> FPGA;
    typedef DeviceType<kGPU_MALI> GPU_MALI;

    //! data type
    enum DataType {
        PM_INVALID = -1,
        PM_HALF = 0,
        PM_FLOAT = 1,
        PM_DOUBLE = 2,
        PM_INT8 = 3,
        PM_INT16 = 4,
        PM_INT32 = 5,
        PM_INT64 = 6,
        PM_UINT8 = 7,
        PM_UINT16 = 8,
        PM_UINT32 = 9,
        PM_STRING = 10,
        PM_BOOL = 11,
        PM_SHAPE = 12,
        PM_TENSOR = 13
    };

    //!
    enum PMStatus {
        PMSuccess = 0xFF,        /*!< No errors */
        PMNotInitialized = 0x01, /*!< Data not initialized. */
        PMInvalidValue = 0x02,   /*!< Incorrect variable value. */
        PMMemAllocFailed = 0x03, /*!< Memory allocation error. */
        PMUnKownError = 0x04,    /*!< Unknown error. */
        PMOutOfAuthority = 0x05, /*!< Try to modified data not your own*/
        PMOutOfMem = 0x06,       /*!< OOM error*/
        PMUnImplError = 0x07,    /*!< Unimplement error. */
        PMWrongDevice = 0x08     /*!< un-correct device. */
    };
}
src/common/variant.h (hunk `@@ -21,79 +21,79 @@ SOFTWARE.`)

The change is clang-format reflow only; the file after the change (Chinese comments translated):

```cpp
#pragma once

namespace paddle_mobile {
    template <int ID, typename Type> struct IDToType {
        typedef Type type_t;
    };

    template <typename F, typename... Ts> struct VariantHelper {
        static const size_t size = sizeof(F) > VariantHelper<Ts...>::size
                                       ? sizeof(F)
                                       : VariantHelper<Ts...>::size;

        inline static void Destroy(size_t id, void *data) {
            if (id == typeid(F).hash_code()) {
                reinterpret_cast<F *>(data)->~F();
            } else {
                VariantHelper<Ts...>::Destroy(id, data);
            }
        }
    };

    template <typename F> struct VariantHelper<F> {
        static const size_t size = sizeof(F);
        inline static void Destroy(size_t id, void *data) {
            if (id == typeid(F).hash_code()) {
                // reinterpret_cast<F*>(data)->~F();
            } else {
                // std::cout << "no match found" << std::endl;
            }
        }
    };

    template <size_t size> class RawData {
      public:
        char data[size];
        RawData() {}
        RawData(const RawData &raw_data) { strcpy(data, raw_data.data); }
        // void operator=(const RawData &raw_data){
        //     strcpy(data, raw_data.data);
        // }
    };

    template <typename... Ts> struct Variant {
        Variant(const Variant &variant) {
            // std::cout << "copy constructor" << std::endl;
            type_id = variant.type_id;
            data = variant.data;
        }

        Variant() : type_id(invalid_type()) {}
        ~Variant() {
            // helper::Destroy(type_id, &data);
        }

        template <typename T, typename... Args> void Set(Args &&... args) {
            helper::Destroy(type_id, &data);
            new (&data) T(std::forward<Args>(args)...);
            type_id = typeid(T).hash_code();
        }

        template <typename T> T &Get() const {
            if (type_id == typeid(T).hash_code()) {
                return *const_cast<T *>(reinterpret_cast<const T *>(&data));
            } else {
                // std::cout << " bad cast in variant " << std::endl;
                throw std::bad_cast();
            }
        }

        size_t TypeId() const { return type_id; }

      private:
        static inline size_t invalid_type() {
            return typeid(void).hash_code();
        }
        typedef VariantHelper<Ts...> helper;
        size_t type_id;
        RawData<helper::size> data;
    };

    template <typename T> struct Vistor {
        typedef T type_t;
    };
} // namespace paddle_mobile
```
src/framework/attribute.cpp (hunk `@@ -19,5 +19,5 @@ SOFTWARE.`)

```cpp
#include "attribute.h"

namespace paddle_mobile {
    namespace framework {}
} // namespace paddle_mobile
```
src/framework/attribute.h (hunk `@@ -22,107 +22,110 @@ SOFTWARE.`)

The change is clang-format reflow only; the file after the change:

```cpp
#include "framework.pb.h"

namespace paddle_mobile {
    namespace framework {

        class BlockDesc;

        class Attribute {
          public:
            static Attribute
            GetAttrValue(const proto::OpDesc::Attr &attr_desc) {
                // std::cout << "begin get attr value" << std::endl;
                Attribute attr;
                switch (attr_desc.type()) {
                    case proto::AttrType::BOOLEAN: {
                        attr.Set<bool>(attr_desc.b());
                        break;
                    }
                    case proto::AttrType::INT: {
                        attr.Set<int>(attr_desc.i());
                        break;
                    }
                    case proto::AttrType::FLOAT: {
                        attr.Set<float>(attr_desc.f());
                        break;
                    }
                    case proto::AttrType::STRING: {
                        attr.Set<std::string>(attr_desc.s());
                        break;
                    }
                    case proto::AttrType::BOOLEANS: {
                        std::vector<bool> val(attr_desc.bools_size());
                        for (int i = 0; i < attr_desc.bools_size(); ++i) {
                            val[i] = attr_desc.bools(i);
                        }
                        attr.Set<std::vector<bool>>(val);
                        break;
                    }
                    case proto::AttrType::INTS: {
                        std::vector<int> val(attr_desc.ints_size());
                        for (int i = 0; i < attr_desc.ints_size(); ++i) {
                            val[i] = attr_desc.ints(i);
                        }
                        attr.Set<std::vector<int>>(val);
                        break;
                    }
                    case proto::AttrType::FLOATS: {
                        std::vector<float> val(attr_desc.floats_size());
                        for (int i = 0; i < attr_desc.floats_size(); ++i) {
                            val[i] = attr_desc.floats(i);
                        }
                        attr.Set<std::vector<float>>(val);
                        break;
                    }
                    case proto::AttrType::STRINGS: {
                        std::vector<std::string> val(attr_desc.strings_size());
                        for (int i = 0; i < attr_desc.strings_size(); ++i) {
                            val[i] = attr_desc.strings(i);
                        }
                        attr.Set<std::vector<std::string>>(val);
                        break;
                    }
                    case proto::AttrType::LONG: {
                        attr.Set<int64_t>(attr_desc.l());
                        break;
                    }
                    default:
                        // std::cout << " not support " << std::endl;
                        break;
                }
                // std::cout << "end get attr value" << std::endl;
                return attr;
            }

            Attribute() {}

            template <typename T, typename... Args>
            Attribute &Set(Args &&... args) {
                variant_.Set<T>(args...);
                return *this;
            }

            template <typename T> T &Get() const {
                return variant_.Get<T>();
            }

          private:
            Variant<int, float, std::string, std::vector<int>,
                    std::vector<float>, std::vector<std::string>, bool,
                    std::vector<bool>, BlockDesc *, int64_t>
                variant_;
        };

        using AttributeMap = std::unordered_map<std::string, Attribute>;

        class AttrReader {
          public:
            explicit AttrReader(const AttributeMap &attrs) : attrs_(attrs) {}

            template <typename T>
            inline T Get(const std::string &name) const {
                // PADDLE_ENFORCE(attrs_.count(name) != 0, "%s should be in
                // AttributeMap", name);
                return ((Attribute)attrs_.at(name)).Get<T>();
            }

          private:
            const AttributeMap &attrs_;
        };

    } // namespace framework
} // namespace paddle_mobile
```
src/framework/block_desc.cpp (hunk `@@ -19,32 +19,32 @@ SOFTWARE.`)

The change is clang-format reflow only; the file after the change:

```cpp
#include "block_desc.h"

namespace paddle_mobile {
    namespace framework {

        std::vector<std::shared_ptr<VarDesc>> BlockDesc::Vars() const {
            std::vector<std::shared_ptr<VarDesc>> res;
            for (const auto &p : vars_) {
                res.push_back(p.second);
            }
            return res;
        }

        std::vector<std::shared_ptr<OpDesc>> BlockDesc::Ops() const {
            std::vector<std::shared_ptr<OpDesc>> res;
            for (const auto &op : ops_) {
                res.push_back(op);
            }
            return res;
        }

        BlockDesc::BlockDesc(const proto::BlockDesc &desc) : desc_(desc) {
            for (const proto::VarDesc &var_desc : desc_.vars()) {
                vars_[var_desc.name()].reset(new VarDesc(var_desc));
            }
            for (const proto::OpDesc &op_desc : desc_.ops()) {
                ops_.emplace_back(new framework::OpDesc(op_desc));
            }
        }

    } // namespace framework
} // namespace paddle_mobile
```
src/framework/block_desc.h (hunk `@@ -24,46 +24,50 @@ SOFTWARE.`)

The change is clang-format reflow only; the file after the change:

```cpp
#include "var_desc.h"

namespace paddle_mobile {
    namespace framework {

        class BlockDesc : PaddleMobileObject {
          public:
            BlockDesc(const proto::BlockDesc &desc);

            const int &ID() const { return desc_.idx(); }
            const int &Parent() const { return desc_.parent_idx(); }

            bool operator==(
                const paddle_mobile::framework::BlockDesc &in_block) const {
                return this->ID() == in_block.ID() &&
                       this->Parent() == in_block.Parent();
            }

            bool operator<(
                const paddle_mobile::framework::BlockDesc &in_block) const {
                return this->ID() < in_block.ID() &&
                       this->Parent() < in_block.Parent();
            }

            std::vector<std::shared_ptr<VarDesc>> Vars() const;
            std::vector<std::shared_ptr<OpDesc>> Ops() const;

          private:
            proto::BlockDesc desc_;
            std::vector<std::shared_ptr<OpDesc>> ops_;
            std::unordered_map<std::string, std::shared_ptr<VarDesc>> vars_;
        };

    } // namespace framework
} // namespace paddle_mobile

namespace std {
    template <> struct hash<paddle_mobile::framework::BlockDesc> {
        typedef paddle_mobile::framework::BlockDesc argument_type;
        typedef std::size_t result_type;
        result_type operator()(argument_type const &s) const noexcept {
            result_type const h1(std::hash<int>{}(s.ID()));
            result_type const h2(std::hash<int>{}(s.ID()));
            return h1 ^ (h2 << 1);
        }
    };
} // namespace std
```
src/framework/data_layout.h (hunk `@@ -19,49 +19,50 @@ limitations under the License. */`)

The change is clang-format reflow only; the file after the change:

```cpp
#include <string>

namespace paddle_mobile {
    namespace framework {

        enum class DataLayout {
            kNHWC = 0,
            kNCHW = 1,
            kAnyLayout = 2,
        };

        inline DataLayout StringToDataLayout(const std::string &str) {
            std::string s(str);
            for (size_t i = 0; i < s.size(); ++i) {
                s[i] = toupper(s[i]);
            }
            if (s == "NHWC") {
                return DataLayout::kNHWC;
            } else if (s == "NCHW") {
                return DataLayout::kNCHW;
            } else if (s == "ANYLAYOUT") {
                return DataLayout::kAnyLayout;
            } else {
                // std::cout << "Unknown storage order string: %s", s;
            }
        }

        inline std::string DataLayoutToString(const DataLayout &data_layout) {
            switch (data_layout) {
                case DataLayout::kNHWC:
                    return "NHWC";
                case DataLayout::kNCHW:
                    return "NCHW";
                case DataLayout::kAnyLayout:
                    return "ANY_LAYOUT";
                default:
                    break;
                    // std::cout << "unknown DataLayou %d", data_layout;
            }
        }

        inline std::ostream &operator<<(std::ostream &out,
                                        const DataLayout &l) {
            out << DataLayoutToString(l);
            return out;
        }

    } // namespace framework
} // namespace paddle_mobile
```
src/framework/data_transform.cpp
浏览文件 @
aef98218
...
@@ -21,67 +21,72 @@ SOFTWARE.
...
@@ -21,67 +21,72 @@ SOFTWARE.
#include "data_transform.h"
#include "data_transform.h"
namespace
paddle_mobile
{
namespace
paddle_mobile
{
namespace
framework
{
namespace
framework
{
static
void
PassTensorData
(
Tensor
*
from
,
Tensor
*
to
)
{
static
void
PassTensorData
(
Tensor
*
from
,
Tensor
*
to
)
{
to
->
ShareDataWith
(
*
from
);
to
->
ShareDataWith
(
*
from
);
*
from
=
Tensor
();
*
from
=
Tensor
();
}
}
void
DataTransform
(
const
OpKernelType
&
expected_kernel_type
,
void
DataTransform
(
const
OpKernelType
&
expected_kernel_type
,
const
OpKernelType
&
kernel_type_for_var
,
const
OpKernelType
&
kernel_type_for_var
,
const
Tensor
&
input_tensor
,
Tensor
*
output_tensor
)
{
const
Tensor
&
input_tensor
,
Tensor
*
output_tensor
)
{
bool
transformed
=
false
;
bool
transformed
=
false
;
Tensor
in
;
Tensor
in
;
in
.
ShareDataWith
(
input_tensor
);
in
.
ShareDataWith
(
input_tensor
);
Tensor
out
;
Tensor
out
;
// // do layout transform
// // do layout transform
// if (NeedTransformLayout(expected_kernel_type.data_layout_,
    //  if (NeedTransformLayout(expected_kernel_type.data_layout_,
    //                          kernel_type_for_var.data_layout_)) {
    //      TransDataLayout(kernel_type_for_var, expected_kernel_type, in, &out);
    //      transformed = true;
    //      PassTensorData(&out, &in);
    //  }
    //
    //  // do data type transform
    //  if (expected_kernel_type.data_type_ != kernel_type_for_var.data_type_) {
    //      TransDataType(kernel_type_for_var, expected_kernel_type, in, &out);
    //      transformed = true;
    //      PassTensorData(&out, &in);
    //  }
    //
    //  // do device transform
    //  if (!platform::is_same_place(kernel_type_for_var.place_,
    //                               expected_kernel_type.place_)) {
    //      TransDataDevice(in, expected_kernel_type.place_, &out);
    //      transformed = true;
    //      PassTensorData(&out, &in);
    //  }
    //
    //  PADDLE_ENFORCE(transformed, "No transform is applied, please check!");

    // get output data
    output_tensor->ShareDataWith(in);
}

void CopyVariableWithTensor(const Variable &in_var, const Tensor &tensor,
                            Variable &out_var) {
    //  if (in_var.IsType<LoDTensor>()) {
    //      auto &in_lod_tensor = in_var.Get<LoDTensor>();
    //      auto *tran_lod_tensor = out_var.GetMutable<LoDTensor>();
    //      tran_lod_tensor->set_lod(in_lod_tensor.lod());
    //      tran_lod_tensor->set_layout(in_lod_tensor.layout());
    //      tran_lod_tensor->ShareDataWith(tensor);
    //  } else if (in_var.IsType<SelectedRows>()) {
    //      auto &in_selected_rows = in_var.Get<SelectedRows>();
    //      auto *trans_selected_rows = out_var.GetMutable<SelectedRows>();
    //      trans_selected_rows->set_height(in_selected_rows.height());
    //      trans_selected_rows->set_rows(in_selected_rows.rows());
    //      trans_selected_rows->mutable_value()->ShareDataWith(tensor);
    //  } else {
    //      PADDLE_THROW("unknown var type");
    //  }
}

} // namespace framework
} // namespace paddle_mobile
src/framework/data_transform.h
@@ -28,14 +28,14 @@ SOFTWARE.

#include "variable.h"

namespace paddle_mobile {
namespace framework {

void DataTransform(const OpKernelType &expected_kernel_type,
                   const OpKernelType &kernel_type_for_var,
                   const Tensor &input_tensor, Tensor *out);

void CopyVariableWithTensor(const Variable &in_var, const Tensor &tensor,
                            Variable &out_var);

} // namespace framework
} // namespace paddle_mobile
src/framework/data_type.h
@@ -21,23 +21,23 @@ SOFTWARE.

#include "framework.pb.h"

namespace paddle_mobile {
namespace framework {

// inline proto::VarType::Type ToDataType(std::type_index type) {
//     using namespace paddle_mobile::framework::proto;
//     if (typeid(float).hash_code() == type.hash_code()) {
//         return proto::VarType::FP32;
//     } else if (typeid(double).hash_code() == type.hash_code()) {
//         return proto::VarType::FP64;
//     } else if (typeid(int).hash_code() == type.hash_code()) {
//         return proto::VarType::INT32;
//     } else if (typeid(int64_t).hash_code() == type.hash_code()) {
//         return proto::VarType::INT64;
//     } else if (typeid(bool).hash_code() == type.hash_code()) {
//         return proto::VarType::BOOL;
//     } else {
//         //  PADDLE_THROW("Not supported");
//     }
// }

} // namespace framework
} // namespace paddle_mobile
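The commented-out `ToDataType` dispatches on `hash_code()` with an if/else chain. An alternative sketch uses `std::type_index` as a hash-map key, which gives the same mapping with an O(1) lookup. The `VarType` enum below is a stand-in for illustration, not the generated `proto::VarType`:

```cpp
#include <cassert>
#include <cstdint>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

// Hypothetical enum mirroring the shape of proto::VarType.
enum class VarType { FP32, FP64, INT32, INT64, BOOL, UNKNOWN };

// Table-driven type-to-enum mapping: std::type_index is hashable and
// comparable, so no manual hash_code() chain is needed.
inline VarType ToDataType(std::type_index type) {
    static const std::unordered_map<std::type_index, VarType> table = {
        {typeid(float), VarType::FP32},  {typeid(double), VarType::FP64},
        {typeid(int), VarType::INT32},   {typeid(int64_t), VarType::INT64},
        {typeid(bool), VarType::BOOL},
    };
    auto it = table.find(type);
    return it == table.end() ? VarType::UNKNOWN : it->second;
}
```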
src/framework/ddim.cc
@@ -15,311 +15,320 @@ limitations under the License. */

#include "ddim.h"

namespace paddle_mobile {
namespace framework {

/// @cond HIDDEN

template <int i> Dim<i> make_dim(const int64_t *d) {
    return Dim<i>(*d, make_dim<i - 1>(d + 1));
}

template <> Dim<0> make_dim<0>(const int64_t *d) { return Dim<0>(*d); }

void make_ddim(DDim &ddim, const int64_t *dims, int n) {
    switch (n) {
    case 0:
        ddim = make_dim<0>(dims);
        break;
    case 1:
        ddim = make_dim<1>(dims);
        break;
    case 2:
        ddim = make_dim<2>(dims);
        break;
    case 3:
        ddim = make_dim<3>(dims);
        break;
    case 4:
        ddim = make_dim<4>(dims);
        break;
    case 5:
        ddim = make_dim<5>(dims);
        break;
    case 6:
        ddim = make_dim<6>(dims);
        break;
    case 7:
        ddim = make_dim<7>(dims);
        break;
    case 8:
        ddim = make_dim<8>(dims);
        break;
    case 9:
        ddim = make_dim<9>(dims);
        break;
    default:
        //  std::cout << "Dynamic dimensions must have between [1, 9]
        //  dimensions.";
        break;
    }
}

/// @endcond

DDim make_ddim(std::initializer_list<int64_t> dims) {
    DDim result(make_dim(0));
    make_ddim(result, dims.begin(), dims.size());
    return result;
}

DDim make_ddim(const std::vector<int64_t> &dims) {
    DDim result(make_dim(0));
    make_ddim(result, &dims[0], dims.size());
    return result;
}

DDim make_ddim(const std::vector<int> &dims) {
    std::vector<int64_t> res(dims.size());
    std::transform(dims.begin(), dims.end(), res.begin(),
                   [](int d) { return static_cast<int64_t>(d); });
    return make_ddim(res);
}

/// @cond HIDDEN
// XXX For some reason, putting this in an anonymous namespace causes errors
struct DynamicMutableIndexer : Vistor<int64_t &> {
  public:
    explicit DynamicMutableIndexer(int idx) : idx_(idx) {}

    template <int D> int64_t &operator()(Dim<D> &dim) const {
        return dim[idx_];
    }

  private:
    int idx_;
};

struct DynamicConstIndexer : public Vistor<int64_t> {
  public:
    explicit DynamicConstIndexer(int idx) : idx_(idx) {}

    template <int D> int64_t operator()(const Dim<D> &dim) const {
        return dim[idx_];
    }

  private:
    int idx_;
};

/// @endcond

int64_t &DDim::operator[](int idx) {
    return DDim::ApplyVistor(DynamicMutableIndexer(idx), *this);
}

int64_t DDim::operator[](int idx) const {
    return DDim::ApplyVistor(DynamicConstIndexer(idx), *this);
}

int DDim::size() const { return arity(*this); }

bool DDim::operator==(DDim d) const {
    //  if (var.which() != d.getVar().which()) {
    //      return false;
    //  } else {
    std::vector<int64_t> v1 = vectorize(*this);
    std::vector<int64_t> v2 = vectorize(d);

    for (unsigned int i = 0; i < v1.size(); i++) {
        if (v1[i] != v2[i]) {
            return false;
        }
    }

    return true;
    //  }
}

bool DDim::operator!=(DDim d) const { return !(*this == d); }

DDim DDim::operator+(DDim d) const {
    std::vector<int64_t> v1 = vectorize(*this);
    std::vector<int64_t> v2 = vectorize(d);

    std::vector<int64_t> v3;

    assert(v1.size() == v2.size());

    for (unsigned int i = 0; i < v1.size(); i++) {
        v3.push_back(v1[i] + v2[i]);
    }

    return make_ddim(v3);
}

DDim DDim::operator*(DDim d) const {
    std::vector<int64_t> v1 = vectorize(*this);
    std::vector<int64_t> v2 = vectorize(d);

    std::vector<int64_t> v3;

    assert(v1.size() == v2.size());

    for (unsigned int i = 0; i < v1.size(); i++) {
        v3.push_back(v1[i] * v2[i]);
    }

    return make_ddim(v3);
}

int64_t get(const DDim &ddim, int idx) { return ddim[idx]; }

void set(DDim &ddim, int idx, int value) { ddim[idx] = value; }

/// @cond HIDDEN
struct VectorizeVisitor : Vistor<void> {
    std::vector<int64_t> &vector;

    explicit VectorizeVisitor(std::vector<int64_t> &v) : vector(v) {}

    template <typename T> void operator()(const T &t) {
        vector.push_back(t.head);
        this->operator()(t.tail);
    }

    void operator()(const Dim<0> &t) {}
};
/// @endcond

std::vector<int64_t> vectorize(const DDim &ddim) {
    std::vector<int64_t> result;
    VectorizeVisitor visitor(result);
    DDim::ApplyVistor(visitor, ddim);
    return result;
}

// NOTE: framework::vectorize converts to type int64_t
//       which does not fit cudnn inputs.
std::vector<int> vectorize2int(const DDim &ddim) {
    std::vector<int64_t> temp = vectorize(ddim);
    std::vector<int> result(temp.begin(), temp.end());
    return result;
}

struct ProductVisitor : Vistor<int64_t> {
    template <int D> int64_t operator()(const Dim<D> &dim) {
        return product(dim);
    }
};

int64_t product(const DDim &ddim) {
    ProductVisitor visitor;
    return DDim::ApplyVistor(visitor, ddim);
}

struct SliceVectorizeVisitor : Vistor<void> {
    std::vector<int64_t> &vector;
    int begin;
    int end;

    SliceVectorizeVisitor(std::vector<int64_t> &v, int b, int e)
        : vector(v), begin(b), end(e) {
        //  PADDLE_ENFORCE(begin < end,
        //                 "Begin index must be less than end index in ddim
        //                 slice.");
        //  PADDLE_ENFORCE(begin >= 0,
        //                 "Begin index can't be less than zero in ddim
        //                 slice.");
    }

    template <int S> void operator()(const Dim<S> &dim) {
        if (begin == 0) {
            vector.push_back(dim.head);
        } else {
            --begin;
        }
        --end;
        if (end > 0) {
            this->operator()(dim.tail);
        }
    }

    void operator()(const Dim<0> &dim) {
        //  PADDLE_ENFORCE(end == 0, "End index in ddim slice is out of
        //  bound.");
    }
};

DDim slice_ddim(const DDim &ddim, int begin, int end) {
    std::vector<int64_t> vec;
    vec.reserve(end - begin);
    SliceVectorizeVisitor visitor(vec, begin, end);
    //  boost::apply_visitor(visitor, dim);
    DDim::ApplyVistor(visitor, ddim);
    //  visitor(ddim.var.Get<Dim<4>>());
    return make_ddim(vec);
}

/// \cond HIDDEN

struct ArityVisitor : Vistor<int> {
    template <int D> int operator()(Dim<D>) const { return D; }
};

/// \endcond

int arity(const DDim &d) {
    ArityVisitor arityVisitor = ArityVisitor();
    return DDim::ApplyVistor(arityVisitor, d);
    //  return arityVisitor(d.var.Get<Dim<4>>());
    //  return boost::apply_visitor(ArityVisitor(), d); }
}

/// \cond HIDDEN

struct OSVistor : Vistor<std::ostream &> {
    OSVistor(std::ostream &os) : os_(os) {}

    template <int D> std::ostream &operator()(Dim<D> dim) const {
        return os_ << dim;
    }

  private:
    std::ostream &os_;
};

/// \endcond

std::ostream &operator<<(std::ostream &os, const DDim &ddim) {
    auto vistor = OSVistor(os);
    DDim::ApplyVistor(vistor, ddim);
    return os;
}

DDim::DDim(std::initializer_list<int64_t> init_list) {
    *this = make_ddim(init_list);
}

DDim flatten_to_2d(const DDim &src, int num_col_dims) {
    int rank = src.size();
    return make_ddim({product(slice_ddim(src, 0, num_col_dims)),
                      product(slice_ddim(src, num_col_dims, rank))});
}

DDim flatten_to_1d(const DDim &src) { return make_ddim({product(src)}); }

DDim stride(const DDim &ddim) {
    std::vector<int64_t> strides(ddim.size());
    strides[ddim.size() - 1] = 1;
    for (int i = ddim.size() - 2; i >= 0; --i) {
        strides[i] = strides[i + 1] * ddim[i + 1];
    }
    return framework::make_ddim(strides);
}

DDim stride_numel(const framework::DDim &ddim) {
    std::vector<int64_t> strides(ddim.size());
    strides[ddim.size() - 1] = ddim[ddim.size() - 1];
    for (int i = ddim.size() - 2; i >= 0; --i) {
        strides[i] = strides[i + 1] * ddim[i];
    }
    return framework::make_ddim(strides);
}

} // namespace framework
} // namespace paddle_mobile
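`stride` and `stride_numel` above differ only in how the innermost entry is seeded: `stride` seeds it with 1 and produces row-major element strides, while `stride_numel` seeds it with the last dimension and produces suffix element counts. The same arithmetic on plain vectors, as an illustrative sketch independent of the `DDim` machinery (assumes rank >= 1):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Row-major strides: s[i] is the distance in elements between consecutive
// indices along dimension i.
std::vector<int64_t> Stride(const std::vector<int64_t> &dims) {
    std::vector<int64_t> s(dims.size());
    s.back() = 1; // innermost dimension is contiguous
    for (int i = static_cast<int>(dims.size()) - 2; i >= 0; --i) {
        s[i] = s[i + 1] * dims[i + 1];
    }
    return s;
}

// Suffix element counts: s[i] is the number of elements in dims[i..rank).
std::vector<int64_t> StrideNumel(const std::vector<int64_t> &dims) {
    std::vector<int64_t> s(dims.size());
    s.back() = dims.back();
    for (int i = static_cast<int>(dims.size()) - 2; i >= 0; --i) {
        s[i] = s[i + 1] * dims[i];
    }
    return s;
}
```

For `dims = {2, 3, 4}`, `Stride` yields `{12, 4, 1}` and `StrideNumel` yields `{24, 12, 4}`.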
src/framework/ddim.h
@@ -22,140 +22,145 @@ limitations under the License. */

#include <vector>

namespace paddle_mobile {
namespace framework {

/**
 * \brief A dynamically sized dimension.
 *
 * The number of dimensions must be between [1, 9].
 */
struct DDim {
    typedef Variant<Dim<0>, Dim<1>, Dim<2>, Dim<3>, Dim<4>, Dim<5>, Dim<6>,
                    Dim<7>, Dim<8>, Dim<9>>
        DDimVar;
    DDimVar var;

    template <typename Vistor>
    static typename Vistor::type_t ApplyVistor(Vistor vistor, const DDim &d) {
        if (d.var.TypeId() == typeid(Dim<0>).hash_code()) {
            return vistor(d.var.Get<Dim<0>>());
        } else if (d.var.TypeId() == typeid(Dim<1>).hash_code()) {
            return vistor(d.var.Get<Dim<1>>());
        } else if (d.var.TypeId() == typeid(Dim<2>).hash_code()) {
            return vistor(d.var.Get<Dim<2>>());
        } else if (d.var.TypeId() == typeid(Dim<3>).hash_code()) {
            return vistor(d.var.Get<Dim<3>>());
        } else if (d.var.TypeId() == typeid(Dim<4>).hash_code()) {
            return vistor(d.var.Get<Dim<4>>());
        } else if (d.var.TypeId() == typeid(Dim<5>).hash_code()) {
            return vistor(d.var.Get<Dim<5>>());
        } else if (d.var.TypeId() == typeid(Dim<6>).hash_code()) {
            return vistor(d.var.Get<Dim<6>>());
        } else if (d.var.TypeId() == typeid(Dim<7>).hash_code()) {
            return vistor(d.var.Get<Dim<7>>());
        } else if (d.var.TypeId() == typeid(Dim<8>).hash_code()) {
            return vistor(d.var.Get<Dim<8>>());
        } else if (d.var.TypeId() == typeid(Dim<9>).hash_code()) {
            return vistor(d.var.Get<Dim<9>>());
        } else {
            printf(" dim not support \n");
            throw std::bad_exception();
            //  return typename Vistor::type_t();
        }
    }

    DDim() { var.Set<Dim<1>>(Dim<1>()); }

    template <int D> explicit DDim(const Dim<D> &in) { var.Set<Dim<D>>(in); }

    /*implicit*/ DDim(std::initializer_list<int64_t> init_list);

    template <int D> DDim &operator=(const Dim<D> &in) {
        var.Set<Dim<D>>(in);
        return *this;
    }

    int64_t &operator[](int idx);

    int64_t operator[](int idx) const;

    //  template <typename Visitor>
    //  typename Visitor::result_type apply_visitor(Visitor& visitor) {
    //      return var.apply_visitor(visitor);
    //  }
    //
    //  template <typename Visitor>
    //  typename Visitor::result_type apply_visitor(Visitor& visitor) const {
    //      return var.apply_visitor(visitor);
    //  }

    DDimVar getVar() { return var; }

    bool operator==(DDim d) const;

    bool operator!=(DDim d) const;

    DDim operator+(DDim d) const;

    DDim operator*(DDim d) const;

    int size() const;
};

/**
 * \brief Make a DDim from std::vector<int64_t>
 *
 * \param dims A vector of ints. Must be sized between [1, 9]
 */
DDim make_ddim(const std::vector<int64_t> &dims);

DDim make_ddim(const std::vector<int> &dims);

/**
 * \brief Make a DDim from an initializer list
 *
 * \param dims An initializer list of ints. Must be sized between [1, 9]
 */
DDim make_ddim(std::initializer_list<int64_t> dims);

int64_t get(const DDim &dim, int idx);

void set(DDim &dim, int idx, int val);

std::vector<int64_t> vectorize(const DDim &ddim);

std::vector<int> vectorize2int(const DDim &ddim);

int64_t product(const DDim &ddim);

/**
 * \brief Slice a ddim
 *
 * Slice dim with [begin, end).
 * e.g.  DDim d = make_ddim({1, 2, 3, 4, 5});
 *       slice_ddim(d, 1, 3); ====> {2, 3}
 */
DDim slice_ddim(const DDim &dim, int begin, int end);

/**
 * \brief What is the length of this dimension?
 *
 * \param Dynamic dimension to inspect
 */
int arity(const DDim &ddim);

std::ostream &operator<<(std::ostream &, const DDim &);

// Reshape a tensor to a matrix. The matrix's first dimension(column length)
// will be the product of tensor's first `num_col_dims` dimensions.
DDim flatten_to_2d(const DDim &src, int num_col_dims);

DDim flatten_to_1d(const DDim &src);

DDim stride(const DDim &ddim);

DDim stride_numel(const DDim &ddim);

} // namespace framework
} // namespace paddle_mobile
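The `slice_ddim` and `flatten_to_2d` doc comments above can be checked with an equivalent sketch over `std::vector<int64_t>`, assuming the documented [begin, end) slice semantics. These helpers are illustrative stand-ins, not part of the header:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

using Dims = std::vector<int64_t>;

// slice_ddim keeps dims in [begin, end): {1,2,3,4,5} sliced at (1, 3)
// yields {2, 3}, matching the doc comment above.
Dims SliceDims(const Dims &d, int begin, int end) {
    return Dims(d.begin() + begin, d.begin() + end);
}

int64_t Product(const Dims &d) {
    return std::accumulate(d.begin(), d.end(), int64_t{1},
                           std::multiplies<int64_t>());
}

// flatten_to_2d collapses the first num_col_dims dims into the matrix's
// first dimension and the remaining dims into the second.
Dims FlattenTo2d(const Dims &d, int num_col_dims) {
    int rank = static_cast<int>(d.size());
    return {Product(SliceDims(d, 0, num_col_dims)),
            Product(SliceDims(d, num_col_dims, rank))};
}
```

So a `{2, 3, 4, 5}` tensor flattened with `num_col_dims = 2` becomes a `{6, 20}` matrix.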
src/framework/dim.h
This diff is collapsed.
src/framework/executor.cpp
浏览文件 @
aef98218
...
@@ -23,84 +23,93 @@ SOFTWARE.
...
@@ -23,84 +23,93 @@ SOFTWARE.
#include "variable.h"
#include "variable.h"
namespace
paddle_mobile
{
namespace
paddle_mobile
{
namespace
framework
{
namespace
framework
{
template
<
typename
Dtype
>
template
<
typename
Dtype
>
Executor
<
Dtype
>::
Executor
(
const
Program
<
Dtype
>
p
)
:
program_
(
p
)
{
Executor
<
Dtype
>::
Executor
(
const
Program
<
Dtype
>
p
)
:
program_
(
p
)
{
if
(
use_optimize_
)
{
if
(
use_optimize_
)
{
to_predict_program_
=
program_
.
optimizeProgram
;
to_predict_program_
=
program_
.
optimizeProgram
;
}
else
{
}
else
{
to_predict_program_
=
program_
.
originProgram
;
to_predict_program_
=
program_
.
originProgram
;
}
}
const
std
::
vector
<
std
::
shared_ptr
<
BlockDesc
>>
blocks
=
const
std
::
vector
<
std
::
shared_ptr
<
BlockDesc
>>
blocks
=
to_predict_program_
->
Blocks
();
to_predict_program_
->
Blocks
();
// std::cout << " **block size " << blocks.size() << std::endl;
// std::cout << " **block size " << blocks.size() << std::endl;
for
(
int
i
=
0
;
i
<
blocks
.
size
();
++
i
)
{
for
(
int
i
=
0
;
i
<
blocks
.
size
();
++
i
)
{
std
::
shared_ptr
<
BlockDesc
>
block_desc
=
blocks
[
i
];
std
::
shared_ptr
<
BlockDesc
>
block_desc
=
blocks
[
i
];
std
::
vector
<
std
::
shared_ptr
<
OpDesc
>>
ops
=
block_desc
->
Ops
();
std
::
vector
<
std
::
shared_ptr
<
OpDesc
>>
ops
=
block_desc
->
Ops
();
// std::cout << " ops " << ops.size() << std::endl;
// std::cout << " ops " << ops.size() << std::endl;
for
(
int
j
=
0
;
j
<
ops
.
size
();
++
j
)
{
for
(
int
j
=
0
;
j
<
ops
.
size
();
++
j
)
{
std
::
shared_ptr
<
OpDesc
>
op
=
ops
[
j
];
std
::
shared_ptr
<
OpDesc
>
op
=
ops
[
j
];
// std::cout << " input 0 " << op->Input("Input")[0] << std::endl;
// std::cout << " input 0 " << op->Input("Input")[0]
if
(
op
->
Type
()
==
"conv2d"
&&
op
->
Input
(
"Input"
)[
0
]
==
"pixel"
)
{
// << std::endl;
// std::cout << " conv2d attr size: " << op->GetAttrMap().size()
if
(
op
->
Type
()
==
"conv2d"
&&
// << std::endl;
op
->
Input
(
"Input"
)[
0
]
==
"pixel"
)
{
// std::cout << " input size: " << op->GetInputs().size() <<
// std::cout << " conv2d attr size: " <<
// std::endl;
// op->GetAttrMap().size()
// << std::endl;
// std::cout << " input size: " <<
// op->GetInputs().size() <<
                // std::cout << " output size: " << op->GetOutputs().size()
                //           << std::endl;
                Attribute strides_attr = op->GetAttrMap().at("strides");
                std::vector<int> stride = strides_attr.Get<std::vector<int>>();
                for (int k = 0; k < stride.size(); ++k) {
                    // std::cout << " stride " << stride[k] << std::endl;
                }
                std::shared_ptr<operators::ConvOp<Dtype, float>> conv =
                    std::make_shared<operators::ConvOp<Dtype, float>>(
                        op->Type(), op->GetInputs(), op->GetOutputs(),
                        op->GetAttrMap(), program_.scope);
                ops_of_block_[*block_desc.get()].push_back(conv);
            }
        }
    }
}

template <typename Dtype>
std::shared_ptr<Tensor> Executor<Dtype>::predict(Tensor &t) {
    // feed
    auto scope = program_.scope;
    Variable *g_feed_value = scope->Var("pixel");
    auto tensor = g_feed_value->GetMutable<Tensor>();
    tensor->ShareDataWith(t);

    Variable *con_output = scope->Var("conv2d_0.tmp_0");
    Tensor *output_tensor = con_output->GetMutable<Tensor>();
    output_tensor->mutable_data<float>({1, 16, 32, 32});
    // std::cout << typeid(output_tensor).name() << std::endl;
    // std::cout << "output_tensor dims: " << output_tensor->dims()
    //           << std::endl;

    std::shared_ptr<Tensor> out_tensor = std::make_shared<LoDTensor>();
    out_tensor.reset(output_tensor);

    predict(t, 0);
    return out_tensor;
}

template <typename Dtype>
void Executor<Dtype>::predict(const Tensor &t, int block_id) {
    std::shared_ptr<BlockDesc> to_predict_block =
        to_predict_program_->Block(block_id);
    for (int j = 0; j < ops_of_block_[*to_predict_block.get()].size(); ++j) {
        auto op = ops_of_block_[*to_predict_block.get()][j];
        // std::cout << "run start" << std::endl;
        op->Run();
    }
}

template class Executor<CPU>;

}  // namespace framework
}  // namespace paddle_mobile
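The `predict(t, block_id)` loop above is just "look up the operators registered for a block and run them in order." A minimal standalone sketch of that dispatch, with a hypothetical `Op` alias standing in for `std::shared_ptr<OperatorBase<Dtype>>` and plain vectors standing in for tensors:

```cpp
#include <functional>
#include <map>
#include <vector>

// Hypothetical stand-in for Executor<Dtype>::predict(t, block_id): operators
// are grouped per block id and executed in the order they were registered.
using Op = std::function<void(std::vector<float> &)>;

std::vector<float> RunBlock(std::map<int, std::vector<Op>> &ops_of_block,
                            int block_id, std::vector<float> feed) {
    for (auto &op : ops_of_block[block_id]) {
        op(feed);  // analogous to op->Run()
    }
    return feed;
}
```

Running two registered "operators" (double, then add one) over a feed of `{1, 2}` yields `{3, 5}`, mirroring how `ops_of_block_[*to_predict_block.get()]` is traversed.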
src/framework/executor.h
@@ -32,22 +32,22 @@ SOFTWARE.
#include "variable.h"

namespace paddle_mobile {
namespace framework {

template <typename Dtype> class Executor {
  public:
    Executor(const Program<Dtype> p);

    std::shared_ptr<Tensor> predict(Tensor &t);

  private:
    const framework::Program<Dtype> program_;
    std::shared_ptr<ProgramDesc> to_predict_program_;

    void predict(const Tensor &t, int block_id);

    std::map<framework::BlockDesc,
             std::vector<std::shared_ptr<OperatorBase<Dtype>>>>
        ops_of_block_;
    bool use_optimize_ = false;
};

}  // namespace framework
}  // namespace paddle_mobile
src/framework/framework.pb.cpp
(This diff is collapsed.)

src/framework/framework.pb.h
(This diff is collapsed.)

src/framework/lod_tensor.cc
(This diff is collapsed.)

src/framework/lod_tensor.h
@@ -23,178 +23,190 @@ limitations under the License. */
namespace paddle_mobile {
namespace framework {

/*
 * LoD is short for Level of Details.
 *
 * - in a level, each element indicates the relative offset of the lower level
 * - the first element should be 0, which indicates that this sequence starts
 *   from 0
 * - each sequence's begin and end (non-inclusive) is level[id, id+1]
 *
 * For example:
 *    3-level LoD stores
 *
 *    0 2 3
 *    0 2 4 7
 *    0 2 5 7 10 12 15 20
 */
using LoD = std::vector<std::vector<size_t>>;

std::ostream &operator<<(std::ostream &os, const LoD &lod);
std::ostream &operator<<(std::ostream &os, const LoDTensor &t);

std::string LoDToString(const LoD &lod);

LoD SliceInLevel(const LoD &in, size_t level, size_t elem_begin,
                 size_t elem_end);

/*
 * Transform an LoD from relative offsets to absolute offsets.
 */
LoD ToAbsOffset(const LoD &in);

bool operator==(const LoD &a, const LoD &b);

/*
 * Check whether this lod's format is valid.
 *
 * ATTENTION:
 *   - Empty lod is treated as valid.
 *
 * It will check the following:
 *  1. all the offsets in a level should be ascending (no equal items allowed).
 *  2. there should be more than 2 offsets existing in each level.
 *  3. the higher level's last offset should equal the lower level's size-1.
 *  4. the first offset (the begin offset) of each level should be 0.
 *  5. the lowest level's last offset should equal `tensor_height` if
 *     tensor_height > 0.
 */
bool CheckLoD(const LoD &in, int tensor_height = -1);

/*
 * Check whether this absolute lod's format is valid.
 *
 * ATTENTION:
 *   - Empty lod is treated as valid.
 *
 * It will check the following:
 *  1. all the offsets in a level should be ascending (no equal items allowed)
 *  2. there should be more than 2 offsets existing in each level.
 *  3. the first offset of each level should be 0, and the last should be the
 *     same (the height of the underlying tensor) or `tensor_height` if
 *     tensor_height > 0.
 */
bool CheckAbsLoD(const LoD &in, int tensor_height = -1);

/*
 * LoDTensor (Level of details Tensor)
 * see https://en.wikipedia.org/wiki/Level_of_details for reference.
 */
class LoDTensor : public Tensor {
  public:
    LoDTensor() : Tensor() {}

    explicit LoDTensor(const LoD &lod) : lod_(lod) {}

    void set_lod(const LoD &lod) { lod_ = lod; }

    const LoD &lod() const { return lod_; }

    LoD *mutable_lod() { return &lod_; }

    /*
     * Get the start offset and end offset of an element from LoD.
     */
    std::pair<size_t, size_t> lod_element(size_t level, size_t elem) const {
        // PADDLE_ENFORCE_LT(level, NumLevels());
        // PADDLE_ENFORCE_LT(elem, NumElements(level));
        return std::make_pair((lod_)[level][elem], (lod_)[level][elem + 1]);
    }

    /*
     * Number of LoDTensor's levels, each level has units of data, for example,
     * in the sentence's view, article, paragraph, sentence are 3 levels.
     */
    size_t NumLevels() const { return lod_.size(); }

    /*
     * Number of elements in a level.
     */
    size_t NumElements(size_t level = 0) const {
        // PADDLE_ENFORCE_LT(level, NumLevels());
        // the last offset is the end of last element
        return (lod_)[level].size() - 1;
    }

  private:
    LoD lod_;
};

/*
 * Expand the `source` to fit the LoD of `lod`. For example, a `source`
 * LoDTensor is
 *  - LoD:    [0, 2]
 *  - tensor: [a0, a1]
 * a `lod` is
 *  - LoD:    [0 3 5]
 * returns a new LoDTensor
 *  - [a0 a0 a0 a1 a1]
 */
template <typename T>
LoDTensor LodExpand(const LoDTensor &source, const LoD &lod, size_t level) {
    LoD abs_lod = ToAbsOffset(lod);
    const auto &lod_level = lod[level];
    size_t num_instances = source.dims()[0];

    // new tensor
    LoDTensor tensor;
    tensor.set_lod(lod);
    auto dims = source.dims();
    dims[0] = lod_level.back();
    tensor.Resize(dims);
    tensor.mutable_data<T>();

    // PADDLE_ENFORCE_EQ(num_instances, lod_level.size() - 1);
    for (size_t ins = 0; ins < num_instances; ins++) {
        for (size_t elem = lod_level[ins]; elem < lod_level[ins + 1];
             elem++) {
            auto slice = tensor.Slice(elem, elem + 1);
            TensorCopy(source.Slice(ins, ins + 1), &slice);
        }
    }
    return tensor;
}

// Get the absolute offset of a lod[start_level][start_idx:end_idx] and
// relative length of details for every level (i.e., [start_level:]).
//
// For example,
//   lod = [[0, 3, 4, 8], [0, 9, 10, 11, 13, 17, 19, 22, 24]]
//   start_level = 0
//   start_idx = 1
//   end_idx = 3
//
// Returns:
//   LoD = [[1, 4], [2, 4, 2, 3, 2]]
//   pair<size_t, size_t> = {11, 24}
std::pair<LoD, std::pair<size_t, size_t>>
GetSubLoDAndAbsoluteOffset(const LoD &lod, size_t start_idx, size_t end_idx,
                           size_t start_level);

void AppendLoD(LoD *lod, const LoD &lod_length);

/*
 * Serialize/Deserialize LoDTensor to std::ostream
 * You can pass an ofstream or ostringstream to serialize to a file
 * or to an in-memory string. GPU tensor will be copied to CPU.
 */
void SerializeToStream(std::ostream &os, const LoDTensor &tensor);

void DeserializeFromStream(std::istream &is, LoDTensor *tensor);

}  // namespace framework
}  // namespace paddle_mobile
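The `LodExpand` loop above repeats instance `ins` of the source into every slot of `[lod_level[ins], lod_level[ins+1])` in the output. A scalar analogue over `std::vector<std::string>` (a hypothetical `ExpandByLoD` helper, with element assignment standing in for the per-slice `TensorCopy`) makes the indexing concrete:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical scalar analogue of LodExpand<T>: instance `ins` of `source` is
// copied into every position in [lod_level[ins], lod_level[ins + 1]).
std::vector<std::string>
ExpandByLoD(const std::vector<std::string> &source,
            const std::vector<std::size_t> &lod_level) {
    std::vector<std::string> out(lod_level.back());
    for (std::size_t ins = 0; ins + 1 < lod_level.size(); ++ins) {
        for (std::size_t elem = lod_level[ins]; elem < lod_level[ins + 1];
             ++elem) {
            out[elem] = source[ins];  // analogous to TensorCopy of one slice
        }
    }
    return out;
}
```

With the header comment's own example, `source = [a0, a1]` and `lod_level = [0, 3, 5]` expand to `[a0, a0, a0, a1, a1]`.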
src/framework/op_desc.cpp
@@ -5,55 +5,58 @@

#include "op_desc.h"

namespace paddle_mobile {
namespace framework {

OpDesc::OpDesc(const proto::OpDesc &desc) : desc_(desc) {
    for (int i = 0; i < desc_.inputs_size(); ++i) {
        const proto::OpDesc::Var &var = desc_.inputs(i);
        std::vector<std::string> &args = inputs_[var.parameter()];
        int arg_size = var.arguments_size();
        for (int j = 0; j < arg_size; ++j) {
            args.push_back(var.arguments(j));
        }
    }

    for (int i = 0; i < desc_.outputs_size(); ++i) {
        const proto::OpDesc::Var &var = desc_.outputs(i);
        std::vector<std::string> &args = outputs_[var.parameter()];
        int arg_size = var.arguments_size();
        for (int j = 0; j < arg_size; ++j) {
            args.push_back(var.arguments(j));
        }
    }

    for (const proto::OpDesc::Attr &attr : desc_.attrs()) {
        std::string attr_name = attr.name();
        if (attr.type() != proto::AttrType::BLOCK) {
            attrs_[attr_name] = Attribute::GetAttrValue(attr);
            // if (attr.type() == proto::AttrType::INT) {
            //     std::cout << " attrName " << attr_name << " "
            //               << attrs_[attr_name].Get<int>() << std::endl;
            // }
        }
    }
}

const std::vector<std::string> &
OpDesc::Input(const std::string &name) const {
    return inputs_.find(name)->second;
}

const std::vector<std::string> &
OpDesc::Output(const std::string &name) const {
    return outputs_.find(name)->second;
}

Attribute OpDesc::GetAttr(const std::string &name) const {
    auto it = attrs_.find(name);
    return it->second;
}

const std::unordered_map<std::string, Attribute> &
OpDesc::GetAttrMap() const {
    return attrs_;
}

}  // namespace framework
}  // namespace paddle_mobile
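The `OpDesc` constructor flattens each protobuf `Var` slot into a `parameter -> arguments` map. A self-contained sketch of that flattening, with a hypothetical `VarSlot` aggregate standing in for `proto::OpDesc::Var` and its generated accessors:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for proto::OpDesc::Var: a named slot carrying a list
// of variable-name arguments.
struct VarSlot {
    std::string parameter;
    std::vector<std::string> arguments;
};

// Mirrors how the OpDesc constructor fills inputs_ and outputs_: operator[]
// creates the entry on first use, then arguments are appended in order.
std::unordered_map<std::string, std::vector<std::string>>
CollectSlots(const std::vector<VarSlot> &vars) {
    std::unordered_map<std::string, std::vector<std::string>> result;
    for (const auto &var : vars) {
        std::vector<std::string> &args = result[var.parameter];
        for (const auto &arg : var.arguments) {
            args.push_back(arg);
        }
    }
    return result;
}
```

For a conv-like op this turns slots such as `("Input", ["x"])` and `("Filter", ["w0", "w1"])` into the name map that `GetInputs()`/`GetOutputs()` later return.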
src/framework/op_desc.h
@@ -23,29 +23,31 @@ SOFTWARE.

#include "paddle_mobile_object.h"

namespace paddle_mobile {
namespace framework {

class OpDesc : PaddleMobileObject {
  public:
    OpDesc(const proto::OpDesc &desc);

    const std::vector<std::string> &
    Input(const std::string &name) const;
    const std::vector<std::string> &
    Output(const std::string &name) const;
    Attribute GetAttr(const std::string &name) const;

    const VariableNameMap &GetInputs() { return inputs_; }

    const VariableNameMap &GetOutputs() { return outputs_; }

    const AttributeMap &GetAttrMap() const;

    const std::string &Type() { return desc_.type(); };

  private:
    proto::OpDesc desc_;
    VariableNameMap inputs_;
    VariableNameMap outputs_;
    AttributeMap attrs_;
};

}  // namespace framework
}  // namespace paddle_mobile
src/framework/op_info.h
@@ -22,70 +22,74 @@ SOFTWARE.

#include "framework.pb.h"

namespace paddle_mobile {
namespace framework {

template <typename Dtype> struct OpInfo {
    OpCreator<Dtype> creator_;
    const OpCreator<Dtype> &Creator() const {
        // PADDLE_ENFORCE_NOT_NULL(creator_,
        //                         "Operator Creator has not been registered");
        return creator_;
    }
};

template <typename Dtype> class OpInfoMap;

template <typename Dtype>
static OpInfoMap<Dtype> *g_op_info_map = nullptr;

template <typename Dtype> class OpInfoMap {
  public:
    static OpInfoMap &Instance() {
        if (g_op_info_map<Dtype> == nullptr) {
            g_op_info_map<Dtype> = new OpInfoMap();
        }
        return *g_op_info_map<Dtype>;
    };

    bool Has(const std::string &op_type) const {
        return map_.find(op_type) != map_.end();
    }

    void Insert(const std::string &type, const OpInfo<Dtype> &info) {
        // PADDLE_ENFORCE(!Has(type), "Operator %s has been registered",
        //                type);
        map_.insert({type, info});
    }

    const OpInfo<Dtype> &Get(const std::string &type) const {
        auto op_info_ptr = GetNullable(type);
        // PADDLE_ENFORCE_NOT_NULL(op_info_ptr,
        //                         "Operator %s has not been registered",
        //                         type);
        return *op_info_ptr;
    }

    const OpInfo<Dtype> *GetNullable(const std::string &type) const {
        auto it = map_.find(type);
        if (it == map_.end()) {
            return nullptr;
        } else {
            return &it->second;
        }
    }

    const std::unordered_map<std::string, OpInfo<Dtype>> &map() const {
        return map_;
    }

    std::unordered_map<std::string, OpInfo<Dtype>> *mutable_map() {
        return &map_;
    }

  private:
    OpInfoMap() = default;
    std::unordered_map<std::string, OpInfo<Dtype>> map_;
    // DISABLE_COPY_AND_ASSIGN(OpInfoMap);
};

}  // namespace framework
}  // namespace paddle_mobile
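`OpInfoMap` is a lazily created per-`Dtype` singleton registry: `Instance()` allocates on first use, `Insert` records a creator under the op type name, and `GetNullable` returns `nullptr` for unknown types. A minimal non-templated sketch of the same pattern, with a plain function pointer standing in for `OpInfo<Dtype>`:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical `Creator`: the real map stores OpInfo<Dtype> holding an
// OpCreator<Dtype>.
using Creator = int (*)();

class Registry {
  public:
    // Mirrors OpInfoMap<Dtype>::Instance() and g_op_info_map<Dtype>:
    // allocate lazily on first access, then reuse.
    static Registry &Instance() {
        static Registry *instance = nullptr;
        if (instance == nullptr) {
            instance = new Registry();
        }
        return *instance;
    }

    bool Has(const std::string &type) const { return map_.count(type) != 0; }

    void Insert(const std::string &type, Creator c) {
        map_.insert({type, c});
    }

    const Creator *GetNullable(const std::string &type) const {
        auto it = map_.find(type);
        return it == map_.end() ? nullptr : &it->second;
    }

  private:
    Registry() = default;  // construction only through Instance()
    std::unordered_map<std::string, Creator> map_;
};
```

Registering `"conv2d"` once makes `Has("conv2d")` true while unknown types still return `nullptr` from `GetNullable`, which is the contract `Get` relies on.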
src/framework/op_kernel_type.h
@@ -22,43 +22,51 @@ SOFTWARE.

#include "framework.pb.h"

namespace paddle_mobile {
namespace framework {

struct OpKernelType {
    struct Hash {
        size_t operator()(const OpKernelType &key) const {
            int data_type = static_cast<int>(key.data_type_) << LEFT_SHIFT;
            int data_layout = static_cast<int>(key.data_layout_)
                              << (LEFT_SHIFT * 2);
            std::hash<int> hasher;
            return hasher(data_type + data_layout);
        }
    };

    // place, data_type, library_type kinds less than 2^8
    constexpr static int LEFT_SHIFT = 8;
    proto::VarType::Type data_type_;
    DataLayout data_layout_;

    OpKernelType(proto::VarType::Type data_type,
                 DataLayout data_layout = DataLayout::kAnyLayout)
        : data_type_(data_type), data_layout_(data_layout) {}

    bool operator==(const OpKernelType &o) const {
        return data_type_ == o.data_type_ && data_layout_ == o.data_layout_;
    }

    bool operator!=(const OpKernelType &o) const { return !(*this == o); }
};

inline bool NeedTransformLayout(const DataLayout &l, const DataLayout &r) {
    return l != DataLayout::kAnyLayout && r != DataLayout::kAnyLayout &&
           l != r;
}

inline bool TransFromNeeded(const OpKernelType &l, const OpKernelType &r) {
    return (l.data_type_ != r.data_type_) ||
           NeedTransformLayout(l.data_layout_, r.data_layout_);
}

}  // namespace framework
}  // namespace paddle_mobile
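The `Hash` functor packs the two enum fields into disjoint byte ranges before hashing: since each kind is assumed smaller than 2^8, `(type << 8) + (layout << 16)` gives a distinct integer per `(type, layout)` pair, so collisions can only come from `std::hash<int>` itself. A standalone sketch of that packing (plain `int` arguments stand in for the enum fields):

```cpp
#include <cstddef>
#include <functional>

// Mirrors OpKernelType::LEFT_SHIFT: each field fits in 8 bits.
constexpr int kLeftShift = 8;

// Same arithmetic as OpKernelType::Hash before std::hash<int> is applied:
// the two fields land in non-overlapping byte ranges, so the sum cannot
// conflate different (data_type, data_layout) pairs.
int PackKernelKey(int data_type, int data_layout) {
    return (data_type << kLeftShift) + (data_layout << (kLeftShift * 2));
}

std::size_t HashKernelKey(int data_type, int data_layout) {
    return std::hash<int>{}(PackKernelKey(data_type, data_layout));
}
```

Swapping the two fields produces a different packed key, e.g. `PackKernelKey(2, 1)` is `2*256 + 1*65536 = 66048` while `PackKernelKey(1, 2)` is `256 + 131072 = 131328`.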
src/framework/op_proto_maker.h
@@ -19,8 +19,8 @@ SOFTWARE.

#pragma once

namespace paddle_mobile {
namespace framework {

// this class not only makes the proto but also inits attribute checkers.
class OpProtoAndCheckerMaker {};

}  // namespace framework
}  // namespace paddle_mobile
src/framework/operator.cpp
@@ -20,26 +20,26 @@ SOFTWARE.

#include "op_info.h"

namespace paddle_mobile {
namespace framework {

template <typename Dtype>
OperatorBase<Dtype>::OperatorBase(const std::string &type,
                                  const VariableNameMap &inputs,
                                  const VariableNameMap &outputs,
                                  const AttributeMap &attrs,
                                  std::shared_ptr<Scope> scope)
    : type_(type), inputs_(inputs), outputs_(outputs), attrs_(attrs),
      scope_(scope) {
    CheckAllInputOutputSet();
}

template <typename Dtype> void OperatorBase<Dtype>::Run() { RunImpl(); }

template <typename Dtype>
void OperatorBase<Dtype>::CheckAllInputOutputSet() const {}

template class OperatorBase<CPU>;
template class OperatorWithKernel<CPU>;

}  // namespace framework
}  // namespace paddle_mobile
src/framework/operator.h
@@ -33,53 +33,57 @@ SOFTWARE.

#include "variable.h"

namespace paddle_mobile {
namespace framework {

template <typename Dtype> class OperatorBase : PaddleMobileObject {
  public:
    OperatorBase(const std::string &type, const VariableNameMap &inputs,
                 const VariableNameMap &outputs, const AttributeMap &attrs,
                 std::shared_ptr<Scope> scope);
    virtual ~OperatorBase() {}
    virtual void Run();

    const VariableNameMap &Inputs() const { return inputs_; }
    const VariableNameMap &Outputs() const { return outputs_; }
    const std::string &Type() const { return type_; }
    const AttributeMap &Attrs() const { return attrs_; }

  protected:
    std::shared_ptr<Scope> scope_;
    std::string type_;
    VariableNameMap inputs_;
    VariableNameMap outputs_;
    AttributeMap attrs_;

  private:
    void CheckAllInputOutputSet() const;
    virtual void RunImpl() const = 0;
};

template <typename Dtype>
class OperatorWithKernel : public OperatorBase<Dtype> {
  public:
    OperatorWithKernel(const std::string &type,
                       const VariableNameMap &inputs,
                       const VariableNameMap &outputs,
                       const AttributeMap &attrs,
                       std::shared_ptr<Scope> scope)
        : OperatorBase<Dtype>(type, inputs, outputs, attrs, scope) {}

    virtual void InferShape() const = 0;

  protected:
    virtual void RunImpl() const = 0;

  private:
};

template <typename Dtype, typename P>
class OpKernelBase : PaddleMobileObject {
  public:
    virtual void Compute(const P &para) const = 0;

    virtual ~OpKernelBase() = default;
};

}  // namespace framework
}  // namespace paddle_mobile
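`OperatorBase::Run()` delegates to the private pure-virtual `RunImpl()`, so concrete operators override only the implementation hook while callers go through the public entry point. A self-contained sketch of that split, with a hypothetical `DemoOp` (not a framework op) reporting execution through a flag:

```cpp
// Minimal mirror of the OperatorBase::Run() -> RunImpl() delegation.
class OperatorBase {
  public:
    virtual ~OperatorBase() = default;
    virtual void Run() { RunImpl(); }  // public entry point

  private:
    virtual void RunImpl() const = 0;  // subclasses implement this hook
};

// Hypothetical concrete operator: overrides only the private hook.
class DemoOp : public OperatorBase {
  public:
    explicit DemoOp(bool *ran) : ran_(ran) {}

  private:
    void RunImpl() const override { *ran_ = true; }
    bool *ran_;
};
```

Calling `Run()` through an `OperatorBase&` dispatches into `DemoOp::RunImpl`, the same way `Executor` invokes `op->Run()` without knowing the concrete op type.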
src/framework/paddle_mobile_object.h
@@ -23,14 +23,14 @@ SOFTWARE.

namespace paddle_mobile {

class PaddleMobileObject {
  public:
    virtual inline const std::string &ToString() {
        char address[128] = {0};
        sprintf(address, "%p", this);
        return std::string(address);
    }

  private:
};

}  // namespace paddle_mobile
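`ToString()` formats the object's own address as its identifier. A standalone sketch of the same idea, assuming nothing from the framework; note it returns `std::string` by value, since the `const std::string &` return in the header above would bind to a temporary:

```cpp
#include <cstdio>
#include <string>

// Hypothetical `Identifiable`, mirroring PaddleMobileObject::ToString():
// print the `this` pointer into a fixed buffer and wrap it in a string.
struct Identifiable {
    std::string ToString() const {
        char address[128] = {0};
        std::snprintf(address, sizeof(address), "%p",
                      static_cast<const void *>(this));
        return std::string(address);  // by value, so nothing dangles
    }
};
```

The exact `%p` formatting is platform-dependent, but the result is non-empty and stable across calls on the same object.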
src/framework/program.cpp
@@ -17,5 +17,5 @@ SOFTWARE.
==============================================================================*/

namespace paddle_mobile {
namespace framework {}
}  // namespace paddle_mobile
src/framework/program.h
@@ -24,17 +24,17 @@ SOFTWARE.

#include "scope.h"

namespace paddle_mobile {
namespace framework {

template <typename Dtype, Precision P = Precision::FP32>
class Program : PaddleMobileObject {
  public:
    std::shared_ptr<ProgramDesc> originProgram;
    std::shared_ptr<ProgramDesc> optimizeProgram;
    std::shared_ptr<Scope> scope;

  private:
};

}  // namespace framework
}  // namespace paddle_mobile
src/framework/program_desc.cpp
@@ -5,18 +5,18 @@
#include "program_desc.h"
namespace paddle_mobile {
namespace framework {

ProgramDesc::ProgramDesc(const proto::ProgramDesc &desc) : desc_(desc) {
    for (auto &block_desc : *desc_.mutable_blocks()) {
        // new framework::BlockDesc(block_desc)
        blocks_.emplace_back(std::make_shared<BlockDesc>(block_desc));
    }
}

std::shared_ptr<BlockDesc> ProgramDesc::Block(size_t idx) {
    return blocks_[idx];
}

}  // namespace framework
}  // namespace paddle_mobile
src/framework/program_desc.h
@@ -25,18 +25,20 @@ SOFTWARE.
#include "paddle_mobile_object.h"
namespace paddle_mobile {
namespace framework {

class ProgramDesc : PaddleMobileObject {
  public:
    ProgramDesc(const proto::ProgramDesc &desc);
    std::shared_ptr<BlockDesc> Block(size_t idx);
    const std::vector<std::shared_ptr<BlockDesc>> &Blocks() {
        return blocks_;
    };

  private:
    std::vector<std::shared_ptr<BlockDesc>> blocks_;
    proto::ProgramDesc desc_;
};

}  // namespace framework
}  // namespace paddle_mobile
src/framework/scope.cc
@@ -4,113 +4,116 @@
#include <vector>
namespace paddle_mobile {
namespace framework {

Scope &Scope::NewScope() const {
    std::unique_lock<std::mutex> lock(mutex_);
    kids_.push_back(new Scope(this));
    return *kids_.back();
}

Variable *Scope::Var(const std::string &name) {
    auto *pvar = FindVarLocally(name);
    if (pvar != nullptr) {
        return pvar;
    };
    pvar = new Variable;
    vars_[name] = pvar;
    pvar->name_ = &(vars_.find(name)->first);
    return pvar;
}

// Variable* Scope::Var(std::string* name) {
//   auto var_name = string::Sprintf("%p.%d", this, vars_.size());
//   if (name != nullptr) {
//     *name = var_name;
//   }
//   return Var(var_name);
// }

Variable *Scope::FindVar(const std::string &name) const {
    auto *pvar = FindVarLocally(name);
    if (pvar != nullptr) {
        return pvar;
    }
    return (parent_ == nullptr) ? nullptr : parent_->FindVar(name);
}

const Scope *Scope::FindScope(const Variable *var) const {
    for (auto &name_var : vars_) {
        if (name_var.second == var) {
            return this;
        }
    }
    return (parent_ == nullptr) ? nullptr : parent_->FindScope(var);
}

void Scope::DropKids() {
    for (Scope *s : kids_) {
        delete s;
    }
    kids_.clear();
}

std::vector<std::string> Scope::LocalVarNames() const {
    std::vector<std::string> known_vars;
    known_vars.reserve(vars_.size());
    for (auto &name_var : vars_) {
        known_vars.emplace_back(name_var.first);
    }
    return known_vars;
}

void Scope::DeleteScope(Scope *scope) const {
    std::unique_lock<std::mutex> lock(mutex_);
    auto it = std::find(kids_.begin(), kids_.end(), scope);
    kids_.erase(it);
    delete scope;
    // deferent
}

void Scope::EraseVars(const std::vector<std::string> &var_names) {
    std::set<std::string> var_set(var_names.begin(), var_names.end());
    for (auto it = vars_.begin(); it != vars_.end();) {
        if (var_set.find(it->first) != var_set.end()) {
            delete it->second;
            it = vars_.erase(it);
        } else {
            ++it;
        }
    }
}

void Scope::Rename(const std::string &origin_name,
                   const std::string &new_name) const {
    auto origin_it = vars_.find(origin_name);
    if (origin_it == vars_.end()) {
        return;
    }
    auto new_it = vars_.find(new_name);
    if (new_it != vars_.end()) {
        return;
    }
    vars_[new_name] = origin_it->second;
    vars_.erase(origin_it);
}

//
// std::string Scope::Rename(const std::string& origin_name) const {
//   auto var_name = string::Sprintf("%p.%d", this, vars_.size());
//   Rename(origin_name, var_name);
//   return var_name;
// }

Variable *Scope::FindVarLocally(const std::string &name) const {
    auto it = vars_.find(name);
    if (it != vars_.end()) {
        return it->second;
    }
    return nullptr;
}

}  // namespace framework
}  // namespace paddle_mobile
src/framework/scope.h
@@ -24,57 +24,58 @@ SOFTWARE.
#include <unordered_map> //std::unordered_map
namespace paddle_mobile {
namespace framework {

class Scope {
  public:
    Scope() {}
    ~Scope() {}

    Scope &NewScope() const;

    /// Create a variable with given name if it doesn't exist.
    Variable *Var(const std::string &name);

    /// Create a variable with a scope-unique name.
    Variable *Var(std::string *name = nullptr);

    void EraseVars(const std::vector<std::string> &var_names);

    /// Find a variable in the scope or any of its ancestors. Returns
    /// nullptr if cannot find.
    Variable *FindVar(const std::string &name) const;

    const Scope *parent() const { return parent_; }

    /// Find the scope or an ancestor scope that contains the given
    /// variable.
    const Scope *FindScope(const Variable *var) const;

    void DeleteScope(Scope *scope) const;

    /// Drop all kids scopes belonged to this scope.
    void DropKids();

    // enumerate all the variables current contains.
    std::vector<std::string> LocalVarNames() const;

    // Rename variable to a new name
    void Rename(const std::string &origin_name,
                const std::string &new_name) const;

    // Rename variable to a new name and return the new name
    std::string Rename(const std::string &origin_name) const;

    Variable *FindVarLocally(const std::string &name) const;

  private:
    // Call Scope::NewScope for a sub-scope.
    explicit Scope(Scope const *parent) : parent_(parent) {}

    mutable std::unordered_map<std::string, Variable *> vars_;
    mutable std::list<Scope *> kids_;
    Scope const *parent_{nullptr};

    mutable std::mutex mutex_;
};

}  // namespace framework
}  // namespace paddle_mobile
src/framework/selected_rows.h
@@ -24,57 +24,59 @@ SOFTWARE.
#include "tensor.h"
namespace paddle_mobile {
namespace framework {

class SelectedRows {
  public:
    SelectedRows(const std::vector<int64_t> &rows, const int64_t &height)
        : rows_(rows), height_(height) {
        value_.reset(new Tensor());
    }

    SelectedRows() {
        height_ = 0;
        value_.reset(new Tensor());
    }

    const Tensor &value() const { return *value_; }

    Tensor *mutable_value() { return value_.get(); }

    int64_t height() const { return height_; }

    void set_height(int64_t height) { height_ = height; }

    const std::vector<int64_t> &rows() const { return rows_; }

    std::vector<int64_t> *mutable_rows() { return &rows_; }

    void set_rows(const std::vector<int64_t> &rows) { rows_ = rows; }

    /**
     * get the index of id in rows
     */
    int64_t index(int64_t id) const {
        auto it = std::find(rows_.begin(), rows_.end(), id);
        // PADDLE_ENFORCE(it != rows_.end(), "id should be in rows");
        return static_cast<int64_t>(std::distance(rows_.begin(), it));
    }

    DDim GetCompleteDims() const {
        std::vector<int64_t> dims = vectorize(value_->dims());
        dims[0] = height_;
        return make_ddim(dims);
    }

  private:
    // Notice: rows can be duplicate. We can have {0, 4, 7, 0, 5, 7, 9}
    // here.
    // SelectedRows are simply concated when adding together. Until a
    // SelectedRows add a Tensor, will the duplicate rows be handled.
    std::vector<int64_t> rows_;
    std::unique_ptr<Tensor> value_{nullptr};
    int64_t height_;
};

}  // namespace framework
}  // namespace paddle_mobile
src/framework/tensor.h (diff collapsed)
src/framework/tensor_util.cc (diff collapsed)
src/framework/tensor_util.h
@@ -20,47 +20,47 @@ limitations under the License. */
#include <vector>
namespace paddle_mobile {
namespace framework {

void TensorCopy(const Tensor &src, Tensor *dst);
void TensorCopySync(const Tensor &src, Tensor *dst);

template <typename T>
void TensorFromVector(const std::vector<T> &src, Tensor *dst);

template <typename T>
void TesnorToVector(const Tensor &src, std::vector<T> *dst);

bool TensorContainsNAN(const framework::Tensor &tensor);
bool TensorContainsInf(const framework::Tensor &tensor);

void TensorToStream(std::ostream &os, const Tensor &tensor);
void TensorFromStream(std::istream &is, Tensor *tensor);

//
// The implementation of template functions.
//
template <typename T>
void TensorFromVector(const std::vector<T> &src, Tensor *dst) {
    auto src_ptr = static_cast<const void *>(src.data());
    dst->Resize({static_cast<int64_t>(src.size())});
    auto dst_ptr = static_cast<void *>(dst->mutable_data<T>());
    auto size = src.size() * sizeof(T);
    memory::Copy(dst_ptr, src_ptr, size);
}

template <typename T>
void TensorToVector(const Tensor &src, std::vector<T> *dst) {
    auto src_ptr = static_cast<const void *>(src.data<T>());
    auto size = src.numel() * sizeof(T);
    dst->resize(src.numel());
    auto dst_ptr = static_cast<void *>(dst->data());
    memory::Copy(dst_ptr, src_ptr, size);
}

}  // namespace framework
}  // namespace paddle_mobile
src/framework/var_desc.cpp
@@ -20,9 +20,9 @@ SOFTWARE.
namespace paddle_mobile {
namespace framework {

VarDesc::VarDesc(const proto::VarDesc &desc) : desc_(desc) {}

}  // namespace framework
}  // namespace paddle_mobile
src/framework/var_desc.h (diff collapsed)
src/framework/var_type.h
@@ -23,16 +23,17 @@ SOFTWARE.
#include "variable.h"
namespace paddle_mobile {
namespace framework {

inline proto::VarType::Type ToVarType(std::type_index type) {
    if (type.hash_code() == typeid(LoDTensor).hash_code()) {
        return proto::VarType_Type_LOD_TENSOR;
    } else if (type.hash_code() == typeid(SelectedRows).hash_code()) {
        return proto::VarType_Type_SELECTED_ROWS;
    } else {
        // PADDLE_THROW("ToVarType:Unsupported type %s",
        // type.name());
    }
}

}  // namespace framework
}  // namespace paddle_mobile
src/framework/variable.h (diff collapsed)
src/io.cpp (diff collapsed)
src/io.h
@@ -27,13 +27,14 @@ SOFTWARE.
namespace paddle_mobile {

template <typename Dtype, Precision P = Precision::FP32>
class Loader : PaddleMobileObject {
  public:
    const framework::Program<Dtype, P> Load(const std::string &dirname);

  private:
    void LoadVar(framework::LoDTensor *tensor,
                 const std::string &file_path);
};

}  // namespace paddle_mobile
src/memory/t_malloc.cc
@@ -22,30 +22,30 @@ SOFTWARE.
#include <cstring>
namespace paddle_mobile {
namespace memory {

const int MALLOC_ALIGN = 16;

void Copy(void *dst, const void *src, size_t num) {
    std::memcpy(dst, src, num);
};

void *Alloc(size_t size) {
    size_t offset = sizeof(void *) + MALLOC_ALIGN - 1;
    char *p = static_cast<char *>(malloc(offset + size));
    if (!p) {
        return nullptr;
    }
    void *r = reinterpret_cast<void *>(
        reinterpret_cast<size_t>(p + offset) & (~(MALLOC_ALIGN - 1)));
    static_cast<void **>(r)[-1] = p;
    return r;
}

void Free(void *ptr) {
    if (ptr) {
        free(static_cast<void **>(ptr)[-1]);
    }
}

}  // namespace memory
}  // namespace paddle_mobile
src/memory/t_malloc.h (diff collapsed)
src/operators/conv_op.cpp (diff collapsed)
src/operators/conv_op.h (diff collapsed)
src/operators/kernel/arm/conv_kernel.cpp (diff collapsed)
src/operators/kernel/conv_kernel.h (diff collapsed)
src/operators/kernel/fpga/conv_kernel.cpp (diff collapsed)
src/operators/math/im2col.cc (diff collapsed)
src/operators/math/im2col.h (diff collapsed)
src/operators/math/math_function.cc (diff collapsed)
src/operators/math/math_function.h (diff collapsed)
src/operators/math/vol2col.cc (diff collapsed)
src/operators/math/vol2col.h (diff collapsed)
src/operators/op_param.cpp (diff collapsed)
src/operators/op_param.h (diff collapsed)
src/platform/data_type.h (diff collapsed)
src/platform/macros.h
@@ -17,9 +17,9 @@ limitations under the License. */
// Disable the copy and assignment operator for a class.
#ifndef DISABLE_COPY_AND_ASSIGN
#define DISABLE_COPY_AND_ASSIGN(classname)              \
  private:                                              \
    classname(const classname &) = delete;              \
    classname(classname &&) = delete;                   \
    classname &operator=(const classname &) = delete;   \
    classname &operator=(classname &&) = delete
#endif
android-cmake/android.toolchain.cmake → tools/android-cmake/android.toolchain.cmake (file moved)
ios-cmake/ios.toolchain.cmake → tools/ios-cmake/ios.toolchain.cmake (file moved)
.clang_format.hook → tools/pre-commit.hooks/.clang_format.hook (file moved)
tools/pre-commit.hooks/.copyright.hook (new file, mode 100644; diff collapsed)
tools/pre-commit.hooks/clang-format.bash (new file, mode 100755; diff collapsed)
tools/pre-commit.hooks/copyright.py (new file, mode 100755; diff collapsed)
tools/pre-commit.hooks/cpplint.bash (new file, mode 100755; diff collapsed)