s920243400 / PaddleDetection (forked from PaddlePaddle / PaddleDetection; in sync with the fork source)
Commit a5bfed37
Authored on Dec 10, 2018 by nhzlx

Merge branch 'develop' of https://github.com/paddlepaddle/paddle into add_benchmark_for_trt

test=develop

Parents: afc51e6f, bc6d0a34
4 changed files with 55 additions and 25 deletions (+55, -25)

README.md  (+11, -11)
paddle/fluid/framework/data_layout_transform.cc  (+11, -8)
paddle/fluid/inference/tensorrt/convert/pool2d_op.cc  (+6, -2)
python/paddle/fluid/tests/unittests/test_dist_base.py  (+27, -4)
README.md

@@ -2,8 +2,8 @@
 [![Build Status](https://travis-ci.org/PaddlePaddle/Paddle.svg?branch=develop)](https://travis-ci.org/PaddlePaddle/Paddle)
-[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://paddlepaddle.org/documentation/docs/en/1.1/getstarted/index_en.html)
+[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://paddlepaddle.org/documentation/docs/en/1.2/getstarted/index_en.html)
-[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://paddlepaddle.org/documentation/docs/zh/1.1/beginners_guide/index.html)
+[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://paddlepaddle.org/documentation/docs/zh/1.2/beginners_guide/index.html)
 [![Release](https://img.shields.io/github/release/PaddlePaddle/Paddle.svg)](https://github.com/PaddlePaddle/Paddle/releases)
 [![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)

@@ -19,7 +19,7 @@ Our vision is to enable deep learning for everyone via PaddlePaddle.
 Please refer to our [release announcement](https://github.com/PaddlePaddle/Paddle/releases) to track the latest feature of PaddlePaddle.
-### Latest PaddlePaddle Release: [Fluid 1.1.0](https://github.com/PaddlePaddle/Paddle/tree/release/1.1)
+### Latest PaddlePaddle Release: [Fluid 1.2.0](https://github.com/PaddlePaddle/Paddle/tree/release/1.2)
 ### Install Latest Stable Release:
 ```
 # Linux CPU

@@ -27,9 +27,9 @@ pip install paddlepaddle
 # Linux GPU cuda9cudnn7
 pip install paddlepaddle-gpu
 # Linux GPU cuda8cudnn7
-pip install paddlepaddle-gpu==1.1.0.post87
+pip install paddlepaddle-gpu==1.2.0.post87
 # Linux GPU cuda8cudnn5
-pip install paddlepaddle-gpu==1.1.0.post85
+pip install paddlepaddle-gpu==1.2.0.post85
 # For installation on other platform, refer to http://paddlepaddle.org/
 ```

@@ -76,26 +76,26 @@ pip install paddlepaddle-gpu==1.1.0.post85
 ## Installation
-It is recommended to read [this doc](http://paddlepaddle.org/documentation/docs/zh/1.1/beginners_guide/index.html) on our website.
+It is recommended to read [this doc](http://paddlepaddle.org/documentation/docs/zh/1.2/beginners_guide/install/index_cn.html) on our website.
 ## Documentation
-We provide [English](http://paddlepaddle.org/documentation/docs/en/1.1/getstarted/index_en.html) and
+We provide [English](http://paddlepaddle.org/documentation/docs/en/1.2/getstarted/index_en.html) and
-[Chinese](http://paddlepaddle.org/documentation/docs/zh/1.1/beginners_guide/index.html) documentation.
+[Chinese](http://paddlepaddle.org/documentation/docs/zh/1.2/beginners_guide/index.html) documentation.
 - [Deep Learning 101](https://github.com/PaddlePaddle/book)
   You might want to start from this online interactive book that can run in a Jupyter Notebook.
-- [Distributed Training](http://paddlepaddle.org/documentation/docs/zh/1.1/user_guides/howto/training/cluster_howto.html)
+- [Distributed Training](http://paddlepaddle.org/documentation/docs/zh/1.2/user_guides/howto/training/cluster_howto.html)
   You can run distributed training jobs on MPI clusters.
-- [Python API](http://paddlepaddle.org/documentation/api/zh/1.1/fluid.html)
+- [Python API](http://paddlepaddle.org/documentation/docs/zh/1.2/api_cn/index_cn.html)
   Our new API enables much shorter programs.
-- [How to Contribute](http://paddlepaddle.org/documentation/docs/zh/1.1/advanced_usage/development/contribute_to_paddle.html)
+- [How to Contribute](http://paddlepaddle.org/documentation/docs/zh/1.2/advanced_usage/development/contribute_to_paddle/index_cn.html)
 We appreciate your contributions!
paddle/fluid/framework/data_layout_transform.cc

@@ -151,19 +151,22 @@ void TransDataLayoutFromMKLDNN(const OpKernelType& kernel_type_for_var,
   auto out_format =
       platform::MKLDNNFormatForSize(in_tz.size(), ToMKLDNNFormat(out_layout));
-  void* in_data = GetDataFromTensor(in, in_type);

   // output tensor has the same dims as input. Reorder don't change dims
   out->Resize(in.dims());

-  auto out_data = out->mutable_data(expected_kernel_type.place_, in.type());
-
-  auto in_memory = memory({{{in_tz}, in_type, in_format}, cpu_engine}, in_data);
-  auto out_memory =
-      memory({{{out_tz}, out_type, out_format}, cpu_engine}, out_data);
-
-  platform::Reorder(in_memory, out_memory);
+  if (in_format != out_format) {
+    void* in_data = GetDataFromTensor(in, in_type);
+    auto out_data = out->mutable_data(expected_kernel_type.place_, in.type());
+
+    auto in_memory =
+        memory({{{in_tz}, in_type, in_format}, cpu_engine}, in_data);
+    auto out_memory =
+        memory({{{out_tz}, out_type, out_format}, cpu_engine}, out_data);
+
+    platform::Reorder(in_memory, out_memory);
+  } else {
+    out->ShareDataWith(in);
+  }

   out->set_layout(out_layout);
   // reset format since the out tensor will be feed to non-MKLDNN OPkernel
   out->set_format(memory::format::format_undef);
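Read in isolation, the behavioural change in this hunk is a guard: the MKLDNN reorder, and the destination allocation it requires, now runs only when the source and destination memory formats actually differ; when they already match, the output tensor simply shares the input buffer. A minimal control-flow sketch, written in Python for brevity, with hypothetical callables standing in for the C++ helpers:

```python
# Sketch only: `reorder` and `share_data_with` are hypothetical stand-ins for
# platform::Reorder and Tensor::ShareDataWith; this is not PaddlePaddle API.
def trans_layout_sketch(in_format, out_format, reorder, share_data_with):
    if in_format != out_format:
        reorder()            # formats differ: allocate the output and reorder/copy
    else:
        share_data_with()    # formats match: reuse the input buffer, no copy
```

The else branch is what the hunk adds; previously the reorder ran unconditionally.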
paddle/fluid/inference/tensorrt/convert/pool2d_op.cc

@@ -109,8 +109,12 @@ class Pool2dOpConverter : public OpConverter {
     }
     if (pool_type == "max") {
-      nvinfer1::DimsHW pre_pad(paddings[0], paddings[1]);
-      nvinfer1::DimsHW post_pad(paddings[0], paddings[1]);
+      // Under ceil mode, the pre_pad and post_pad are used to
+      // record the the padding size. In some ceil mode cases,
+      // we do not need padding, so we initialize the two vars to 0.
+      nvinfer1::DimsHW pre_pad(0, 0);
+      nvinfer1::DimsHW post_pad(0, 0);
       if (ceil_mode) {
         // If ceil mode is true, we will pad the appropriate size to the input.
         DealCeilMode(input_shape, ksize, strides, paddings, &pre_pad, &post_pad,
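For intuition on why the converter tracks pre_pad and post_pad at all: Paddle's pool2d with ceil_mode=True rounds the output length up, so the last pooling window may extend past the padded input, while a plain TensorRT pooling layer effectively rounds down. The usual reconciliation, which DealCeilMode presumably performs (its body is outside this hunk), is to add just enough trailing padding that floor-mode pooling yields the ceil-mode output length. A small illustrative calculation, not PaddlePaddle code:

```python
import math

def ceil_mode_extra_padding(in_size, ksize, stride, pad):
    """Illustrative helper (not PaddlePaddle code): how much extra trailing
    padding lets a floor-mode pooling produce the ceil-mode output length."""
    padded = in_size + 2 * pad
    floor_out = (padded - ksize) // stride + 1
    ceil_out = math.ceil((padded - ksize) / stride) + 1
    # the last ceil-mode window must still fit after adding `extra` trailing cells
    extra = max(0, (ceil_out - 1) * stride + ksize - padded)
    return floor_out, ceil_out, extra

# Example: a length-6 input, kernel 3, stride 2, no padding
# -> floor mode gives 2 outputs, ceil mode gives 3, so 1 extra trailing cell is needed.
print(ceil_mode_extra_padding(6, 3, 2, 0))  # (2, 3, 1)
```

After the change, pre_pad and post_pad start at zero and are only populated inside the ceil_mode branch, as the new comment in the hunk states.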
python/paddle/fluid/tests/unittests/test_dist_base.py

@@ -378,6 +378,18 @@ class TestDistBase(unittest.TestCase):
             stderr=tr1_pipe,
             env=env1)

+        # Wait until trainer process terminate
+        while True:
+            stat0 = tr0_proc.poll()
+            time.sleep(0.1)
+            if stat0 is not None:
+                break
+        while True:
+            stat1 = tr1_proc.poll()
+            time.sleep(0.1)
+            if stat1 is not None:
+                break
+
         tr0_out, tr0_err = tr0_proc.communicate()
         tr1_out, tr1_err = tr1_proc.communicate()

@@ -390,11 +402,21 @@ class TestDistBase(unittest.TestCase):
         ps0.terminate()
         ps1.terminate()

+        # print server log
+        with open("/tmp/ps0_err.log", "r") as fn:
+            sys.stderr.write("ps0 stderr: %s\n" % fn.read())
+        with open("/tmp/ps1_err.log", "r") as fn:
+            sys.stderr.write("ps1 stderr: %s\n" % fn.read())
+
         # print log
-        sys.stderr.write('trainer 0 stdout: %s\n' % pickle.loads(tr0_out))
-        sys.stderr.write('trainer 0 stderr: %s\n' % tr0_err)
-        sys.stderr.write('trainer 1 stdout: %s\n' % pickle.loads(tr1_out))
-        sys.stderr.write('trainer 1 stderr: %s\n' % tr1_err)
+        if stat0 == 0:
+            sys.stderr.write('trainer 0 stdout: %s\n' % pickle.loads(tr0_out))
+        with open("/tmp/tr0_err.log", "r") as fn:
+            sys.stderr.write('trainer 0 stderr: %s\n' % fn.read())
+        if stat1 == 0:
+            sys.stderr.write('trainer 1 stdout: %s\n' % pickle.loads(tr1_out))
+        with open("/tmp/tr1_err.log", "r") as fn:
+            sys.stderr.write('trainer 1 stderr: %s\n' % fn.read())

         return pickle.loads(tr0_out), pickle.loads(tr1_out)

@@ -474,6 +496,7 @@ class TestDistBase(unittest.TestCase):
             "PYTHONPATH": os.getenv("PYTHONPATH", ""),
             "LD_LIBRARY_PATH": os.getenv("LD_LIBRARY_PATH", ""),
             "FLAGS_fraction_of_gpu_memory_to_use": "0.15",
+            "FLAGS_rpc_deadline": "5000",  # 5sec to fail fast
             "FLAGS_cudnn_deterministic": "1",
             "http_proxy": "",
             "NCCL_P2P_DISABLE": "1"
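Taken together, the first two hunks change how the test harness collects trainer output: it polls each trainer subprocess until it exits, keeps the exit codes (stat0, stat1), and later unpickles a trainer's stdout only when that trainer exited cleanly, reading its stderr from a log file instead of the pipe. Below is a self-contained sketch of the poll-then-communicate pattern using only the standard library; the commands are placeholders, not the real trainer invocation:

```python
import subprocess
import sys
import time

# Placeholder commands; the real test launches trainer scripts with their own env.
tr0_proc = subprocess.Popen([sys.executable, "-c", "print('trainer 0 done')"],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
tr1_proc = subprocess.Popen([sys.executable, "-c", "print('trainer 1 done')"],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Wait until each trainer process terminates; poll() returns None while running.
stats = []
for proc in (tr0_proc, tr1_proc):
    while True:
        stat = proc.poll()
        time.sleep(0.1)
        if stat is not None:
            break
    stats.append(stat)

# Both processes have exited, so reading the pipes cannot block indefinitely.
tr0_out, _ = tr0_proc.communicate()
tr1_out, _ = tr1_proc.communicate()
print(stats, tr0_out, tr1_out)
```

In the actual test, the recorded exit codes also gate the logging hunk (`if stat0 == 0:` before unpickling stdout).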