机器未来 / Paddle (fork of PaddlePaddle / Paddle)
Commit ba0fe0a8 (unverified)
Authored Nov 06, 2020 by iducn; committed via GitHub on Nov 06, 2020
Parent: bd8dfe38

revert the modified shell script (#28453)
Showing 13 changed files with 131 additions and 154 deletions (+131 −154).
paddle/.set_port.sh                              +3  −3
paddle/.set_python_path.sh                       +3  −5
paddle/fluid/inference/api/demo_ci/clean.sh      +1  −2
paddle/fluid/inference/api/demo_ci/run.sh        +85 −90
paddle/fluid/inference/check_symbol.sh           +6  −6
paddle/fluid/train/demo/clean.sh                 +1  −1
paddle/fluid/train/demo/run.sh                   +6  −6
paddle/fluid/train/imdb_demo/run.sh              +1  −1
paddle/scripts/paddle_docker_build.sh            +16 −16
tools/cudaError/start.sh                         +2  −2
tools/dockerfile/build_scripts/install_nccl2.sh  +2  −2
tools/gen_alias_mapping.sh                       +2  −2
tools/manylinux1/build_scripts/install_nccl2.sh  +3  −18
paddle/.set_port.sh

```diff
@@ -13,6 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-DIRNAME="$(dirname "$0")"
-sh "$DIRNAME"/.common_test_util.sh
-set_port "$@"
+DIRNAME=`dirname $0`
+source $DIRNAME/.common_test_util.sh
+set_port $@
```
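The reverted lines drop the quoting that the previous commit had introduced. A small sketch of what that quoting buys (the path below is a made-up example, standing in for `$0`):

```shell
# Hypothetical script path containing a space.
script='/tmp/dir with space/demo.sh'

# Quoted command substitution keeps the path intact as one argument.
quoted=$(dirname "$script")

# Unquoted expansion word-splits the path into three arguments,
# so dirname no longer sees the real path.
unquoted=$(dirname $script 2>/dev/null)

echo "quoted:   $quoted"
echo "unquoted: $unquoted"
```

For paths without whitespace the two spellings agree, which is why the unquoted form works in CI.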
paddle/.set_python_path.sh

```diff
@@ -24,14 +24,12 @@
 PYPATH=""
 set -x
 while getopts "d:" opt; do
-    case "$opt" in
+    case $opt in
         d) PYPATH=$OPTARG;;
-        *)
-            ;;
     esac
 done
-shift $(("$OPTIND" - 1))
+shift $(($OPTIND - 1))
 export PYTHONPATH=$PYPATH:$PYTHONPATH
-"$@"
+$@
```
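Both spellings of this `getopts` idiom parse the same way for simple arguments. A runnable sketch of the pattern (the argument values are invented):

```shell
# Simulate the script's arguments: -d sets PYPATH, the rest is a command line.
set -- -d /some/lib python demo.py

PYPATH=""
while getopts "d:" opt; do
    case "$opt" in
        d) PYPATH=$OPTARG ;;
        *) ;;
    esac
done
shift $((OPTIND - 1))   # discard the parsed options; "$@" is now the command

echo "PYPATH=$PYPATH"
echo "remaining: $*"
```

`OPTIND` counts the arguments `getopts` consumed, so the `shift` leaves only the command to execute.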
paddle/fluid/inference/api/demo_ci/clean.sh

```diff
 #!/bin/bash
 set -x
-cd "$(dirname "$0")" || exit
+cd `dirname $0`
 rm -rf build/ data/
 set +x
```
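The `|| exit` that the revert removes exists to stop the script before the following `rm -rf` can run from the wrong directory when `cd` fails. A demo in a subshell, using a directory name that is assumed not to exist:

```shell
# If cd fails, exit immediately instead of falling through to the next command.
( cd /nonexistent-dir-for-demo 2>/dev/null || exit 7
  echo "this rm -rf would have run in the wrong place" )
status=$?
echo "subshell exited with $status"
```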
paddle/fluid/inference/api/demo_ci/run.sh

```diff
 #!/bin/bash
 set -x
-PADDLE_ROOT="$1"
-TURN_ON_MKL="$2"          # use MKL or Openblas
-TEST_GPU_CPU="$3"         # test both GPU/CPU mode or only CPU mode
-DATA_DIR="$4"             # dataset
-TENSORRT_INCLUDE_DIR="$5" # TensorRT header file dir, default to /usr/local/TensorRT/include
-TENSORRT_LIB_DIR="$6"     # TensorRT lib file dir, default to /usr/local/TensorRT/lib
-MSVC_STATIC_CRT="$7"
-inference_install_dir="${PADDLE_ROOT}"/build/paddle_inference_install_dir
+PADDLE_ROOT=$1
+TURN_ON_MKL=$2          # use MKL or Openblas
+TEST_GPU_CPU=$3         # test both GPU/CPU mode or only CPU mode
+DATA_DIR=$4             # dataset
+TENSORRT_INCLUDE_DIR=$5 # TensorRT header file dir, default to /usr/local/TensorRT/include
+TENSORRT_LIB_DIR=$6     # TensorRT lib file dir, default to /usr/local/TensorRT/lib
+MSVC_STATIC_CRT=$7
+inference_install_dir=${PADDLE_ROOT}/build/paddle_inference_install_dir
-cd "$(dirname "$0")" || exit
-current_dir=$(pwd)
-if [ "$2" == ON ]; then
+cd `dirname $0`
+current_dir=`pwd`
+if [ $2 == ON ]; then
   # You can export yourself if move the install path
-  MKL_LIB="${inference_install_dir}"/third_party/install/mklml/lib
-  export LD_LIBRARY_PATH="$LD_LIBRARY_PATH":"${MKL_LIB}"
+  MKL_LIB=${inference_install_dir}/third_party/install/mklml/lib
+  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${MKL_LIB}
 fi
-if [ "$3" == ON ]; then
+if [ $3 == ON ]; then
   use_gpu_list='true false'
 else
   use_gpu_list='false'
 fi
 USE_TENSORRT=OFF
-if [ -d "$TENSORRT_INCLUDE_DIR" ] && [ -d "$TENSORRT_LIB_DIR" ]; then
+if [ -d "$TENSORRT_INCLUDE_DIR" -a -d "$TENSORRT_LIB_DIR" ]; then
   USE_TENSORRT=ON
 fi
...
@@ -32,79 +32,77 @@ URL_ROOT=http://paddlemodels.bj.bcebos.com/${PREFIX}
 # download vis_demo data
 function download() {
-  dir_name="$1"
-  mkdir -p "$dir_name"
-  cd "$dir_name" || exit
+  dir_name=$1
+  mkdir -p $dir_name
+  cd $dir_name
   if [[ -e "${PREFIX}${dir_name}.tar.gz" ]]; then
     echo "${PREFIX}${dir_name}.tar.gz has been downloaded."
   else
-      wget -q "${URL_ROOT}""$dir_name".tar.gz
-      tar xzf ./*.tar.gz
+      wget -q ${URL_ROOT}$dir_name.tar.gz
+      tar xzf *.tar.gz
   fi
-  cd .. || exit
+  cd ..
 }
-mkdir -p "$DATA_DIR"
-cd "$DATA_DIR" || exit
+mkdir -p $DATA_DIR
+cd $DATA_DIR
 vis_demo_list='se_resnext50 ocr mobilenet'
 for vis_demo_name in $vis_demo_list; do
-  download "$vis_demo_name"
+  download $vis_demo_name
 done
 # download word2vec data
 mkdir -p word2vec
-cd word2vec || exit
+cd word2vec
 if [[ -e "word2vec.inference.model.tar.gz" ]]; then
   echo "word2vec.inference.model.tar.gz has been downloaded."
 else
   wget -q http://paddle-inference-dist.bj.bcebos.com/word2vec.inference.model.tar.gz
-  tar xzf ./*.tar.gz
+  tar xzf *.tar.gz
 fi
 # compile and test the demo
-cd "$current_dir" || exit
+cd $current_dir
 mkdir -p build
-cd build || exit
-rm -rf ./*
+cd build
+rm -rf *
 for WITH_STATIC_LIB in ON OFF; do
-  if [ "$(uname | grep Win)" != "" ]; then
+  if [ $(echo `uname` | grep "Win") != "" ]; then
     # -----simple_on_word2vec on windows-----
-    cmake .. -G "Visual Studio 14 2015" -A x64 -DPADDLE_LIB="${inference_install_dir}" \
-      -DWITH_MKL="$TURN_ON_MKL" \
+    cmake .. -G "Visual Studio 14 2015" -A x64 -DPADDLE_LIB=${inference_install_dir} \
+      -DWITH_MKL=$TURN_ON_MKL \
       -DDEMO_NAME=simple_on_word2vec \
-      -DWITH_GPU="$TEST_GPU_CPU" \
-      -DWITH_STATIC_LIB="$WITH_STATIC_LIB" \
-      -DMSVC_STATIC_CRT="$MSVC_STATIC_CRT"
+      -DWITH_GPU=$TEST_GPU_CPU \
+      -DWITH_STATIC_LIB=$WITH_STATIC_LIB \
+      -DMSVC_STATIC_CRT=$MSVC_STATIC_CRT
     msbuild /maxcpucount /property:Configuration=Release cpp_inference_demo.sln
     for use_gpu in $use_gpu_list; do
       Release/simple_on_word2vec.exe \
-        --dirname="$DATA_DIR"/word2vec/word2vec.inference.model \
-        --use_gpu="$use_gpu"
-      EXCODE="$?"
-      if [ "$EXCODE" -ne 0 ]; then
+        --dirname=$DATA_DIR/word2vec/word2vec.inference.model \
+        --use_gpu=$use_gpu
+      if [ $? -ne 0 ]; then
         echo "simple_on_word2vec demo runs fail."
         exit 1
       fi
     done
     # -----vis_demo on windows-----
-    rm -rf ./*
-    cmake .. -G "Visual Studio 14 2015" -A x64 -DPADDLE_LIB="${inference_install_dir}" \
-      -DWITH_MKL="$TURN_ON_MKL" \
+    rm -rf *
+    cmake .. -G "Visual Studio 14 2015" -A x64 -DPADDLE_LIB=${inference_install_dir} \
+      -DWITH_MKL=$TURN_ON_MKL \
       -DDEMO_NAME=vis_demo \
-      -DWITH_GPU="$TEST_GPU_CPU" \
-      -DWITH_STATIC_LIB="$WITH_STATIC_LIB" \
-      -DMSVC_STATIC_CRT="$MSVC_STATIC_CRT"
+      -DWITH_GPU=$TEST_GPU_CPU \
+      -DWITH_STATIC_LIB=$WITH_STATIC_LIB \
+      -DMSVC_STATIC_CRT=$MSVC_STATIC_CRT
     msbuild /maxcpucount /property:Configuration=Release cpp_inference_demo.sln
     for use_gpu in $use_gpu_list; do
       for vis_demo_name in $vis_demo_list; do
         Release/vis_demo.exe \
-          --modeldir="$DATA_DIR"/"$vis_demo_name"/model \
-          --data="$DATA_DIR"/"$vis_demo_name"/data.txt \
-          --refer="$DATA_DIR"/"$vis_demo_name"/result.txt \
-          --use_gpu="$use_gpu"
-        EXCODE="$?"
-        if [ "$EXCODE" -ne 0 ]; then
+          --modeldir=$DATA_DIR/$vis_demo_name/model \
+          --data=$DATA_DIR/$vis_demo_name/data.txt \
+          --refer=$DATA_DIR/$vis_demo_name/result.txt \
+          --use_gpu=$use_gpu
+        if [ $? -ne 0 ]; then
          echo "vis demo $vis_demo_name runs fail."
          exit 1
         fi
...
@@ -112,66 +110,63 @@ for WITH_STATIC_LIB in ON OFF; do
       done
     done
   else
     # -----simple_on_word2vec on linux/mac-----
-    rm -rf ./*
-    cmake .. -DPADDLE_LIB="${inference_install_dir}" \
-      -DWITH_MKL="$TURN_ON_MKL" \
+    rm -rf *
+    cmake .. -DPADDLE_LIB=${inference_install_dir} \
+      -DWITH_MKL=$TURN_ON_MKL \
       -DDEMO_NAME=simple_on_word2vec \
-      -DWITH_GPU="$TEST_GPU_CPU" \
-      -DWITH_STATIC_LIB="$WITH_STATIC_LIB"
-    make -j "$(nproc)"
-    word2vec_model="$DATA_DIR"'/word2vec/word2vec.inference.model'
-    if [ -d "$word2vec_model" ]; then
+      -DWITH_GPU=$TEST_GPU_CPU \
+      -DWITH_STATIC_LIB=$WITH_STATIC_LIB
+    make -j $(nproc)
+    word2vec_model=$DATA_DIR'/word2vec/word2vec.inference.model'
+    if [ -d $word2vec_model ]; then
       for use_gpu in $use_gpu_list; do
         ./simple_on_word2vec \
-          --dirname="$DATA_DIR"/word2vec/word2vec.inference.model \
-          --use_gpu="$use_gpu"
-        EXCODE="$?"
-        if [ "$EXCODE" -ne 0 ]; then
+          --dirname=$DATA_DIR/word2vec/word2vec.inference.model \
+          --use_gpu=$use_gpu
+        if [ $? -ne 0 ]; then
           echo "simple_on_word2vec demo runs fail."
           exit 1
         fi
       done
     fi
     # ---------vis_demo on linux/mac---------
-    rm -rf ./*
-    cmake .. -DPADDLE_LIB="${inference_install_dir}" \
-      -DWITH_MKL="$TURN_ON_MKL" \
+    rm -rf *
+    cmake .. -DPADDLE_LIB=${inference_install_dir} \
+      -DWITH_MKL=$TURN_ON_MKL \
       -DDEMO_NAME=vis_demo \
-      -DWITH_GPU="$TEST_GPU_CPU" \
-      -DWITH_STATIC_LIB="$WITH_STATIC_LIB"
-    make -j "$(nproc)"
+      -DWITH_GPU=$TEST_GPU_CPU \
+      -DWITH_STATIC_LIB=$WITH_STATIC_LIB
+    make -j $(nproc)
     for use_gpu in $use_gpu_list; do
       for vis_demo_name in $vis_demo_list; do
         ./vis_demo \
-          --modeldir="$DATA_DIR"/"$vis_demo_name"/model \
-          --data="$DATA_DIR"/"$vis_demo_name"/data.txt \
-          --refer="$DATA_DIR"/"$vis_demo_name"/result.txt \
-          --use_gpu="$use_gpu"
-        EXCODE="$?"
-        if [ "$EXCODE" -ne 0 ]; then
+          --modeldir=$DATA_DIR/$vis_demo_name/model \
+          --data=$DATA_DIR/$vis_demo_name/data.txt \
+          --refer=$DATA_DIR/$vis_demo_name/result.txt \
+          --use_gpu=$use_gpu
+        if [ $? -ne 0 ]; then
           echo "vis demo $vis_demo_name runs fail."
           exit 1
         fi
       done
     done
     # --------tensorrt mobilenet on linux/mac------
-    if [ "$USE_TENSORRT" == ON ] && [ "$TEST_GPU_CPU" == ON ]; then
-      rm -rf ./*
-      cmake .. -DPADDLE_LIB="${inference_install_dir}" \
-        -DWITH_MKL="$TURN_ON_MKL" \
+    if [ $USE_TENSORRT == ON -a $TEST_GPU_CPU == ON ]; then
+      rm -rf *
+      cmake .. -DPADDLE_LIB=${inference_install_dir} \
+        -DWITH_MKL=$TURN_ON_MKL \
         -DDEMO_NAME=trt_mobilenet_demo \
-        -DWITH_GPU="$TEST_GPU_CPU" \
-        -DWITH_STATIC_LIB="$WITH_STATIC_LIB" \
-        -DUSE_TENSORRT="$USE_TENSORRT" \
-        -DTENSORRT_INCLUDE_DIR="$TENSORRT_INCLUDE_DIR" \
-        -DTENSORRT_LIB_DIR="$TENSORRT_LIB_DIR"
-      make -j "$(nproc)"
+        -DWITH_GPU=$TEST_GPU_CPU \
+        -DWITH_STATIC_LIB=$WITH_STATIC_LIB \
+        -DUSE_TENSORRT=$USE_TENSORRT \
+        -DTENSORRT_INCLUDE_DIR=$TENSORRT_INCLUDE_DIR \
+        -DTENSORRT_LIB_DIR=$TENSORRT_LIB_DIR
+      make -j $(nproc)
       ./trt_mobilenet_demo \
-        --modeldir="$DATA_DIR"/mobilenet/model \
-        --data="$DATA_DIR"/mobilenet/data.txt \
-        --refer="$DATA_DIR"/mobilenet/result.txt
-      EXCODE="$?"
-      if [ "$EXCODE" != 0 ]; then
+        --modeldir=$DATA_DIR/mobilenet/model \
+        --data=$DATA_DIR/mobilenet/data.txt \
+        --refer=$DATA_DIR/mobilenet/result.txt
+      if [ $? -ne 0 ]; then
        echo "trt demo trt_mobilenet_demo runs fail."
        exit 1
      fi
...
```
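One recurring change in this file is dropping the `EXCODE="$?"` capture and testing `$?` directly after the demo binary. Capturing matters as soon as any command runs between the binary and the test, because that command overwrites `$?`. A sketch, with `false` standing in for a failing demo binary:

```shell
false                             # stand-in for a demo binary that fails
EXCODE="$?"                       # capture the status immediately
echo "captured status: $EXCODE"   # echo succeeds, but EXCODE still holds 1
if [ "$EXCODE" -ne 0 ]; then
    result=fail
else
    result=ok
fi
echo "$result"
```

Testing `$?` directly is safe only because nothing runs between the demo binary and the `if` in the reverted script.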
paddle/fluid/inference/check_symbol.sh

```diff
 #!/bin/sh
-lib="$1"
-if [ "$#" -ne 1 ]; then echo "No input library"; exit 1; fi
+lib=$1
+if [ $# -ne 1 ]; then echo "No input library"; exit -1; fi
-num_paddle_syms=$(nm -D "${lib}" | grep -c paddle)
-num_google_syms=$(nm -D "${lib}" | grep google | grep -v paddle | grep -c "T ")
+num_paddle_syms=$(nm -D ${lib} | grep paddle | wc -l)
+num_google_syms=$(nm -D ${lib} | grep google | grep -v paddle | grep "T " | wc -l)
-if [ "$num_paddle_syms" -le 0 ]; then echo "Have no paddle symbols"; exit 1; fi
-if [ "$num_google_syms" -ge 1 ]; then echo "Have some google symbols"; exit 1; fi
+if [ $num_paddle_syms -le 0 ]; then echo "Have no paddle symbols"; exit -1; fi
+if [ $num_google_syms -ge 1 ]; then echo "Have some google symbols"; exit -1; fi
 exit 0
```
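`grep -c` and `grep | wc -l` count matching lines identically, so check_symbol.sh merely switches between two spellings of the same check. A sketch on an invented symbol list:

```shell
# Invented nm-style output: two paddle symbols, one google symbol.
syms='paddle_sym_a
google_sym_T
paddle_sym_b'

n_c=$(printf '%s\n' "$syms" | grep -c paddle)          # grep does the counting
n_wc=$(printf '%s\n' "$syms" | grep paddle | wc -l)    # wc does the counting
echo "$n_c $n_wc"
```

The `grep -c` form saves a process and avoids the platform-dependent whitespace padding that `wc -l` can emit.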
paddle/fluid/train/demo/clean.sh

```diff
...
@@ -15,6 +15,6 @@
 # limitations under the License.
 set -x
-cd "$(dirname "$0")" || exit
+cd "$(dirname "$0")"
 rm -rf build/
 set +x
```
paddle/fluid/train/demo/run.sh

```diff
...
@@ -14,14 +14,14 @@ function download() {
 download
 # build demo trainer
-paddle_install_dir="${PADDLE_ROOT}"/build/paddle_install_dir
+paddle_install_dir=${PADDLE_ROOT}/build/paddle_install_dir
 mkdir -p build
-cd build || exit
-rm -rf ./*
-cmake .. -DPADDLE_LIB="$paddle_install_dir" \
-  -DWITH_MKLDNN="$TURN_ON_MKL" \
-  -DWITH_MKL="$TURN_ON_MKL"
+cd build
+rm -rf *
+cmake .. -DPADDLE_LIB=$paddle_install_dir \
+  -DWITH_MKLDNN=$TURN_ON_MKL \
+  -DWITH_MKL=$TURN_ON_MKL
 make
 cd ..
...
```
paddle/fluid/train/imdb_demo/run.sh

```diff
 #!/bin/bash
 set -exu
 build/demo_trainer --flagfile="train.cfg"
```
paddle/scripts/paddle_docker_build.sh

```diff
...
@@ -15,14 +15,14 @@
 # limitations under the License.
 function start_build_docker() {
-  docker pull "$IMG"
+  docker pull $IMG
   apt_mirror='s#http://archive.ubuntu.com/ubuntu#mirror://mirrors.ubuntu.com/mirrors.txt#g'
   DOCKER_ENV=$(cat <<EOL
     -e FLAGS_fraction_of_gpu_memory_to_use=0.15 \
     -e CTEST_OUTPUT_ON_FAILURE=1 \
     -e CTEST_PARALLEL_LEVEL=1 \
-    -e APT_MIRROR="${apt_mirror}" \
+    -e APT_MIRROR=${apt_mirror} \
     -e WITH_GPU=ON \
     -e CUDA_ARCH_NAME=Auto \
     -e WITH_AVX=ON \
...
@@ -39,24 +39,24 @@ EOL
 )
   DOCKER_CMD="nvidia-docker"
-  if ! [ -x "$(command -v "${DOCKER_CMD}")" ]; then
+  if ! [ -x "$(command -v ${DOCKER_CMD})" ]; then
     DOCKER_CMD="docker"
   fi
   if [ ! -d "${HOME}/.ccache" ]; then
-    mkdir "${HOME}"/.ccache
+    mkdir ${HOME}/.ccache
   fi
   set -ex
-  "${DOCKER_CMD}" run -it \
-    "${DOCKER_ENV}" \
-    -e SCRIPT_NAME="$0" \
-    -e CONTENT_DEC_PASSWD="$CONTENT_DEC_PASSWD" \
-    -e TRAVIS_BRANCH="$TRAVIS_BRANCH" \
-    -e TRAVIS_PULL_REQUEST="$TRAVIS_PULL_REQUEST" \
-    -v "$PADDLE_ROOT":/paddle \
-    -v "${HOME}"/.ccache:/root/.ccache \
+  ${DOCKER_CMD} run -it \
+    ${DOCKER_ENV} \
+    -e SCRIPT_NAME=$0 \
+    -e CONTENT_DEC_PASSWD=$CONTENT_DEC_PASSWD \
+    -e TRAVIS_BRANCH=$TRAVIS_BRANCH \
+    -e TRAVIS_PULL_REQUEST=$TRAVIS_PULL_REQUEST \
+    -v $PADDLE_ROOT:/paddle \
+    -v ${HOME}/.ccache:/root/.ccache \
     -w /paddle \
-    "$IMG" \
-    paddle/scripts/paddle_build.sh "$@"
+    $IMG \
+    paddle/scripts/paddle_build.sh $@
   set +x
 }
...
@@ -65,7 +65,7 @@ function main() {
   VERSION="latest-dev"
   PADDLE_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../" && pwd)"
   IMG=${DOCKER_REPO}:${VERSION}
-  start_build_docker "$@"
+  start_build_docker $@
 }
-main "$@"
+main $@
```
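The `command -v` fallback in `start_build_docker` is unaffected by the quoting change for these fixed command names. A sketch of the pattern, with placeholder command names (`sh` stands in for the guaranteed-present fallback; nvidia-docker/docker play that role in the real script):

```shell
# If the preferred command is not installed, fall back to another one.
DOCKER_CMD="no-such-command-xyz"
if ! [ -x "$(command -v ${DOCKER_CMD})" ]; then
    DOCKER_CMD="sh"
fi
echo "using: $DOCKER_CMD"
```

`command -v` prints nothing for a missing command, and `[ -x "" ]` is false, so the branch selects the fallback.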
tools/cudaError/start.sh

```diff
 #!/usr/bin/env bash
 set -ex
-SYSTEM="$(uname -s)"
+SYSTEM=`uname -s`
 rm -f protoc-3.11.3-linux-x86_64.*
 if [ "$SYSTEM" == "Linux" ]; then
   wget --no-check-certificate https://github.com/protocolbuffers/protobuf/releases/download/v3.11.3/protoc-3.11.3-linux-x86_64.zip
...
@@ -28,5 +28,5 @@ if [ "$1" != "" ]; then
   fi
 fi
-python spider.py --version=$version --url="$url"
+python spider.py --version=$version --url=$url
 tar czf cudaErrorMessage.tar.gz cudaErrorMessage.pb
```
tools/dockerfile/build_scripts/install_nccl2.sh

```diff
...
@@ -24,8 +24,8 @@ wget -q -O $DIR/$DEB $URL
 cd $DIR && ar x $DEB && tar xf data.tar.xz
 DEBS=$(find ./var/ -name "*.deb")
 for sub_deb in $DEBS; do
-  echo "$sub_deb"
-  ar x "$sub_deb" && tar xf data.tar.xz
+  echo $sub_deb
+  ar x $sub_deb && tar xf data.tar.xz
 done
 mv -f usr/include/nccl.h /usr/local/include/
 mv -f usr/lib/x86_64-linux-gnu/libnccl* /usr/local/lib/
...
```
tools/gen_alias_mapping.sh

```diff
...
@@ -31,9 +31,9 @@
 # <real API implement>\t<API recommend>,<API other alias name1>,<API other alias name2>,...
-PADDLE_ROOT="$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")/.."
+PADDLE_ROOT="$(dirname $(readlink -f ${BASH_SOURCE[0]}))/.."
-find "${PADDLE_ROOT}"/python/ -name '*.py' \
+find ${PADDLE_ROOT}/python/ -name '*.py' \
   | xargs grep -v '^#' \
   | grep 'DEFINE_ALIAS' \
   | perl -ne '
...
```
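The `find | xargs grep` pipeline above keeps `DEFINE_ALIAS` lines while skipping ones that are commented out. The same filtering applied to an inline sample (the file contents here are invented):

```shell
# Three sample lines: a comment mentioning the marker, a real marker, a plain line.
sample='# DEFINE_ALIAS inside a comment line
from paddle.x import y  # DEFINE_ALIAS
import os'

# Drop whole-line comments first, then count the surviving markers.
hits=$(printf '%s\n' "$sample" | grep -v '^#' | grep -c 'DEFINE_ALIAS')
echo "$hits"
```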
tools/manylinux1/build_scripts/install_nccl2.sh

```diff
 #!/bin/bash
-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
 VERSION=$(nvcc --version | grep release | grep -oEi "release ([0-9]+)\.([0-9])" | sed "s/release //")
 if [ "$VERSION" == "10.0" ]; then
   DEB="nccl-repo-ubuntu1604-2.4.7-ga-cuda10.0_1-1_amd64.deb"
...
@@ -39,10 +24,10 @@ wget -q -O $DIR/$DEB $URL
 cd $DIR && ar x $DEB && tar xf data.tar.xz
 DEBS=$(find ./var/ -name "*.deb")
 for sub_deb in $DEBS; do
-  echo "$sub_deb"
-  ar x "$sub_deb" && tar xf data.tar.xz
+  echo $sub_deb
+  ar x $sub_deb && tar xf data.tar.xz
 done
 mv -f usr/include/nccl.h /usr/local/include/
 mv -f usr/lib/x86_64-linux-gnu/libnccl* /usr/local/lib/
 rm /usr/include/nccl.h
-rm -rf "$DIR"
+rm -rf $DIR
```