Commit 4604d38f

Authored on May 25, 2020 by Jason; committed via GitHub on May 25, 2020.

Merge pull request #102 from Channingss/develop

fix Lite docs & add cpp code style hooks

Parents: 10da38e3, 694615a1
Showing 6 changed files with 69 additions and 13 deletions (+69 −13):

| File | Changes |
| ---- | ---- |
| .pre-commit-config.yaml | +1 −1 |
| deploy/cpp/include/paddlex/results.h | +2 −1 |
| deploy/lite/export_lite.py | +8 −8 |
| docs/tutorials/deploy/deploy_lite.md | +16 −3 |
| tools/codestyle/clang_format.hook | +15 −0 |
| tools/codestyle/cpplint_pre_commit.hook | +27 −0 |
.pre-commit-config.yaml

```diff
@@ -35,6 +35,6 @@
 -   id: cpplint-cpp-source
     name: cpplint
     description: Check C++ code style using cpplint.py.
-    entry: bash cpplint_pre_commit.hook
+    entry: bash ./tools/codestyle/cpplint_pre_commit.hook
     language: system
     files: \.(c|cc|cxx|cpp|cu|h|hpp|hxx)$
```
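The `files:` pattern above decides which staged paths pre-commit hands to the cpplint hook (pre-commit patterns are Python regexes). As a small illustrative sketch, the same selection can be reproduced directly; `is_cpp_source` is a hypothetical helper, not part of the repository:

```python
import re

# Same pattern as the `files:` key in .pre-commit-config.yaml:
# fire only for C/C++ sources, CUDA files, and headers.
CPP_FILE_RE = re.compile(r"\.(c|cc|cxx|cpp|cu|h|hpp|hxx)$")

def is_cpp_source(path):
    """Return True if the pre-commit pattern would select this path."""
    return CPP_FILE_RE.search(path) is not None
```

For example, `deploy/cpp/include/paddlex/results.h` is selected while `deploy/lite/export_lite.py` is not.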
deploy/cpp/include/paddlex/results.h

```diff
@@ -63,9 +63,10 @@ class SegResult : public BaseResult {
  public:
   Mask<int64_t> label_map;
   Mask<float> score_map;
   std::string type = "seg";
   void clear() {
     label_map.clear();
     score_map.clear();
   }
 };
-}  // namespce of PaddleX
+}  // namespace PaddleX
```
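The change above only fixes the closing-namespace comment; the `SegResult` container itself is unchanged. For readers skimming the C++ header, its shape can be sketched as a hypothetical Python analogue (names mirror the header; the `Mask` template is simplified to a plain list, which is an assumption for illustration only):

```python
class SegResult:
    """Hypothetical Python analogue of the C++ SegResult above."""

    def __init__(self):
        self.label_map = []   # Mask<int64_t> label_map: per-pixel class ids
        self.score_map = []   # Mask<float> score_map: per-pixel confidences
        self.type = "seg"     # std::string type = "seg"

    def clear(self):
        # Mirrors SegResult::clear(): empty both masks, keep the type tag.
        self.label_map.clear()
        self.score_map.clear()
```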
deploy/lite/export_lite.py

```diff
@@ -19,30 +19,30 @@ import argparse
 def export_lite():
     opt = lite.Opt()
-    model_file = os.path.join(FLAGS.model_path, '__model__')
-    params_file = os.path.join(FLAGS.model_path, '__params__')
-    opt.run_optimize("", model_file, params_file, FLAGS.place, FLAGS.save_dir)
+    model_file = os.path.join(FLAGS.model_dir, '__model__')
+    params_file = os.path.join(FLAGS.model_dir, '__params__')
+    opt.run_optimize("", model_file, params_file, FLAGS.place, FLAGS.save_file)


 if __name__ == '__main__':
     parser = argparse.ArgumentParser(description=__doc__)
     parser.add_argument(
-        "--model_path",
+        "--model_dir",
         type=str,
         default="",
-        help="model path.",
+        help="path of '__model__' and '__params__'.",
         required=True)
     parser.add_argument(
         "--place",
         type=str,
         default="arm",
-        help="preprocess config path.",
+        help="run place: 'arm|opencl|x86|npu|xpu|rknpu|apu'.",
         required=True)
     parser.add_argument(
-        "--save_dir",
+        "--save_file",
         type=str,
         default="paddlex.onnx",
-        help="Directory for storing the output visualization files.",
+        help="file name for storing the output files.",
         required=True)
     FLAGS = parser.parse_args()
     export_lite()
```
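The renamed flags (`--model_path` → `--model_dir`, `--save_dir` → `--save_file`) change the script's command line. A minimal stand-alone sketch of the new parser, mirroring the `add_argument` calls in the diff above without importing paddlelite, shows how the new invocation parses:

```python
import argparse

def build_parser():
    # Mirrors the argparse setup in export_lite.py after this commit.
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument(
        "--model_dir", type=str, default="",
        help="path of '__model__' and '__params__'.", required=True)
    parser.add_argument(
        "--place", type=str, default="arm",
        help="run place: 'arm|opencl|x86|npu|xpu|rknpu|apu'.", required=True)
    parser.add_argument(
        "--save_file", type=str, default="paddlex.onnx",
        help="file name for storing the output files.", required=True)
    return parser

# Example invocation matching the updated docs (paths are placeholders):
flags = build_parser().parse_args(
    ["--model_dir", "/tmp/inference_model",
     "--place", "arm",
     "--save_file", "paddlex.nb"])
```

Note that all three flags are `required=True`, so the `default` values are never used in practice.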
docs/tutorials/deploy/deploy_lite.md

````diff
+# Mobile Deployment
+
+PaddleX mobile deployment is implemented with PaddleLite. The workflow is: first export the trained model as an inference model, then optimize the model with PaddleLite's Python interface, and finally deploy it with PaddleLite's prediction library. For a detailed introduction to PaddleLite, see the [PaddleLite documentation](https://paddle-lite.readthedocs.io/zh/latest/).
+
+> PaddleX --> Inference Model --> PaddleLite Opt --> PaddleLite Inference
+
+The following describes how to export a PaddleX model as an inference model and then optimize it with PaddleLite's OPT module:
+
 step 1: Install PaddleLite
 ```
@@ -9,15 +16,21 @@ pip install paddlelite
 step 2: Export the PaddleX model as an inference model

 Refer to [Export inference model](deploy_server/deploy_python.html#inference) to export the model in inference format.

-**Note: because the PaddleX code is continuously updated, models below version 1.0.0 cannot currently be used directly for deployment; refer to [Model version upgrade](../upgrade_version.md) to upgrade the model version.**
+**Note: because the PaddleX code is continuously updated, models below version 1.0.0 cannot currently be used directly for deployment; refer to [Model version upgrade](./upgrade_version.md) to upgrade the model version.**

 step 3: Convert the inference model to a PaddleLite model

 ```
-python /path/to/PaddleX/deploy/lite/export_lite.py --model_path /path/to/inference_model --save_dir /path/to/onnx_model
+python /path/to/PaddleX/deploy/lite/export_lite.py --model_dir /path/to/inference_model --save_file /path/to/onnx_model --place place/to/run
 ```

-`--model_path` specifies the path of the inference model, and `--save_dir` specifies where to save the Lite model.
+| Parameter | Description |
+| ---- | ---- |
+| model_dir | path of the inference model, containing the "__model__" and "__params__" files |
+| save_file | name of the output model, "paddlex.nb" by default |
+| place | platform to run on, one of: arm|opencl|x86|npu|xpu|rknpu|apu |

 step 4: Prediction
````
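The `model_dir` contract from the table above (the directory must contain `__model__` and `__params__` files) can be validated up front. The following is a hypothetical helper sketch, not part of PaddleX, that fails early with a clear error instead of letting the optimizer fail on a missing file:

```python
import os

def locate_inference_model(model_dir):
    """Return the '__model__' and '__params__' paths that export_lite.py
    expects inside model_dir, raising if either file is missing.
    (Hypothetical helper for illustration.)"""
    model_file = os.path.join(model_dir, "__model__")
    params_file = os.path.join(model_dir, "__params__")
    for path in (model_file, params_file):
        if not os.path.isfile(path):
            raise FileNotFoundError(path)
    return model_file, params_file
```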
tools/codestyle/clang_format.hook (new file, mode 100755)

```bash
#!/bin/bash
set -e

readonly VERSION="3.8"

version=$(clang-format -version)

if ! [[ $version == *"$VERSION"* ]]; then
    echo "clang-format version check failed."
    echo "a version contains '$VERSION' is needed, but get '$version'"
    echo "you can install the right version, and make an soft-link to '\$PATH' env"
    exit -1
fi

clang-format $@
```
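The hook's version gate is a bash glob test, `[[ $version == *"$VERSION"* ]]`, i.e. a plain substring check on the `clang-format -version` output. A sketch of the same logic in Python (an illustrative stand-in, not part of the hook):

```python
REQUIRED = "3.8"

def clang_format_version_ok(version_output, required=REQUIRED):
    # Mirrors the bash glob test: the required version string must
    # appear anywhere in the `clang-format -version` output.
    return required in version_output
```

Being a substring test, it is deliberately loose: any output containing "3.8" passes, which keeps the hook simple at the cost of accepting, say, a hypothetical "13.8" release as well.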
tools/codestyle/cpplint_pre_commit.hook (new file, mode 100755)

```bash
#!/bin/bash

TOTAL_ERRORS=0
if [[ ! $TRAVIS_BRANCH ]]; then
    # install cpplint on local machine.
    if [[ ! $(which cpplint) ]]; then
        pip install cpplint
    fi
    # diff files on local machine.
    files=$(git diff --cached --name-status | awk '$1 != "D" {print $2}')
else
    # diff files between PR and latest commit on Travis CI.
    branch_ref=$(git rev-parse "$TRAVIS_BRANCH")
    head_ref=$(git rev-parse HEAD)
    files=$(git diff --name-status $branch_ref $head_ref | awk '$1 != "D" {print $2}')
fi
# The trick to remove deleted files: https://stackoverflow.com/a/2413151
for file in $files; do
    if [[ $file =~ ^(patches/.*) ]]; then
        continue;
    else
        cpplint --filter=-readability/fn_size $file;
        TOTAL_ERRORS=$(expr $TOTAL_ERRORS + $?);
    fi
done

exit $TOTAL_ERRORS
```
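The hook builds its file list with `git diff --name-status | awk '$1 != "D" {print $2}'`: every line of `--name-status` output is a status letter followed by a path, and lines whose status is `D` (deleted) are skipped so cpplint never runs on files that no longer exist. A sketch of the same filter in Python (`changed_files` is a hypothetical helper for illustration):

```python
def changed_files(name_status_output):
    """Reproduce the hook's awk filter: from `git diff --name-status`
    output, keep paths whose status is not 'D' (deleted)."""
    files = []
    for line in name_status_output.splitlines():
        parts = line.split()          # e.g. ["M", "deploy/lite/export_lite.py"]
        if len(parts) >= 2 and parts[0] != "D":
            files.append(parts[1])
    return files
```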