Pinoxchio / apollo
Commit e18a591e
Authored Jul 04, 2020 by storypku; committed by Liu Jiaming, Jul 04, 2020

    Docker: exclude tensorrt from nvidia-ml-jetson
Parent: b83feb56

Showing 3 changed files with 87 additions and 24 deletions (+87, -24)
docker/build/installers/install_nvidia_ml_for_jetson.sh  (+17, -5)
docker/build/installers/install_tensorrt.sh  (+46, -2)
docker/build/tegra_cyber.aarch64.dockerfile  (+24, -17)
docker/build/installers/install_nvidia_ml_for_jetson.sh
```diff
@@ -22,6 +22,8 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
 # Fail on first error.
 set -e
 
+TENSORRT_NEEDED=0
+
 ##===== Install CUDA 10.2 =====##
 VERSION_1="10-2"
 VERSION_2="10.2.89"
@@ -66,19 +68,29 @@ for pkg in ${CUDNN_PKGS}; do
 done
 
 for pkg in ${CUDNN_PKGS}; do
-    sudo dpkg -i "${pkg}"
+    dpkg -i "${pkg}"
     rm -rf ${pkg}
 done
 
 info "Successfully installed CUDNN ${MAJOR}"
 
 ##===== Install TensorRT 7 =====##
-# Install PreReqs from local CUDA repo
+# Install pre-reqs for TensorRT from local CUDA repo
 apt-get -y install \
     libcublas10 \
     libcublas-dev
 
+# Kick the ladder and cleanup
+apt-get -y purge "cuda-repo-l4t-${VERSION_1}-local-${VERSION_2}"
+apt-get clean && \
+    rm -rf /var/lib/apt/lists/*
+
+if [ "${TENSORRT_NEEDED}" -eq 0 ]; then
+    exit 0
+fi
+
 CUDA_VER="10.2"
 TRT_VER1="7.1.0-1"
 MAJOR="${TRT_VER1%%.*}"
 TRT_VERSION="${TRT_VER1}+cuda${CUDA_VER}"
@@ -107,7 +119,7 @@ dpkg -i ${TRT_PKGS}
 info "Successfully installed TensorRT ${MAJOR}"
 
 # Kick the ladder and cleanup
 apt-get -y purge "cuda-repo-l4t-${VERSION_1}-local-${VERSION_2}"
 rm -rf ${TRT_PKGS}
 apt-get clean && \
     rm -rf /var/lib/apt/lists/*
```
docker/build/installers/install_tensorrt.sh
```diff
@@ -23,6 +23,44 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
 . /tmp/installers/installer_base.sh
 
 ARCH="$(uname -m)"
 
+if [ "${ARCH}" = "aarch64" ]; then
+    CUDA_VER="10.2"
+    TRT_VER1="7.1.0-1"
+    MAJOR="${TRT_VER1%%.*}"
+    TRT_VERSION="${TRT_VER1}+cuda${CUDA_VER}"
+
+    TRT_PKGS="\
+libnvinfer${MAJOR}_${TRT_VERSION}_arm64.deb \
+libnvinfer-bin_${TRT_VERSION}_arm64.deb \
+libnvinfer-dev_${TRT_VERSION}_arm64.deb \
+libnvinfer-plugin${MAJOR}_${TRT_VERSION}_arm64.deb \
+libnvinfer-plugin-dev_${TRT_VERSION}_arm64.deb \
+libnvonnxparsers${MAJOR}_${TRT_VERSION}_arm64.deb \
+libnvonnxparsers-dev_${TRT_VERSION}_arm64.deb \
+libnvparsers${MAJOR}_${TRT_VERSION}_arm64.deb \
+libnvparsers-dev_${TRT_VERSION}_arm64.deb \
+"
+    # tensorrt_7.1.0.16-1+cuda10.2_arm64.deb
+    # libnvinfer-doc_${TRT_VERSION}_all.deb
+    # libnvinfer-samples_${TRT_VERSION}_all.deb
+
+    for pkg in ${TRT_PKGS}; do
+        info "Downloading ${LOCAL_HTTP_ADDR}/${pkg}"
+        wget "${LOCAL_HTTP_ADDR}/${pkg}"
+    done
+
+    dpkg -i ${TRT_PKGS}
+    info "Successfully installed TensorRT ${MAJOR}"
+
+    rm -rf ${TRT_PKGS}
+    apt-get clean
+    exit 0
+fi
+
 #Install the TensorRT package that fits your particular needs.
 #For only running TensorRT C++ applications:
 #sudo apt-get install libnvinfer7 libnvonnxparsers7 libnvparsers7 libnvinfer-plugin7
@@ -59,9 +97,15 @@ else
     libnvinfer-plugin-dev
 fi
 
-# Make caffe-1.0 compilation pass
+# FIXME(all):
+# Previously soft sym-linked for successful caffe-1.0 compilation.
+# Now that caffe-1.0 is retired, do we still need this?
+# Minor changes required:
+# 1) cudnn major version hard-code fix: v7,v8,...
+# 2) move to cudnn installer section
 CUDNN_HEADER_DIR="/usr/include/$(uname -m)-linux-gnu"
-[[ -e "${CUDNN_HEADER_DIR}/cudnn.h" ]] || \
+[ -e "${CUDNN_HEADER_DIR}/cudnn.h" ] || \
     ln -s "${CUDNN_HEADER_DIR}/cudnn_v7.h" "${CUDNN_HEADER_DIR}/cudnn.h"
 
 # Disable nvidia apt sources.list settings to speed up build process
```
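The last hunk also swaps the bash-only `[[ -e … ]]` test for the portable `[ -e … ]`. Either way, the idiom creates the `cudnn.h` symlink only when no file of that name already exists, which makes the step safe to re-run and keeps it from shadowing a real header. A sketch of the idiom against a throwaway directory (paths here are illustrative, not the installer's `/usr/include` ones):

```shell
#!/bin/bash
# Sketch of the "[ -e FILE ] || ln -s ..." idiom, exercised in a temp dir.
set -e
dir="$(mktemp -d)"
echo "/* versioned header */" > "${dir}/cudnn_v7.h"

# Link cudnn.h -> cudnn_v7.h only if cudnn.h does not already exist.
[ -e "${dir}/cudnn.h" ] || \
    ln -s "${dir}/cudnn_v7.h" "${dir}/cudnn.h"

# Running the same line again is a no-op: the link now satisfies "-e",
# so the ln (which would otherwise fail on an existing file) never runs.
[ -e "${dir}/cudnn.h" ] || \
    ln -s "${dir}/cudnn_v7.h" "${dir}/cudnn.h"

readlink "${dir}/cudnn.h"   # prints the cudnn_v7.h path
rm -rf "${dir}"
```

A bare `ln -s` without the guard would exit non-zero on the second run and, under `set -e`, abort the whole installer.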
docker/build/tegra_cyber.aarch64.dockerfile
```diff
@@ -7,22 +7,29 @@ ARG INSTALL_MODE
 LABEL version="1.2"
 
 ENV DEBIAN_FRONTEND=noninteractive
+ENV PATH /opt/apollo/sysroot/bin:$PATH
 
-COPY installers /tmp/installers
-COPY rcfiles /opt/apollo/rcfiles
-# Pre-downloaded tarballs
-COPY archive /tmp/archive
-
-RUN bash /tmp/installers/install_minimal_environment.sh ${GEOLOC}
-RUN bash /tmp/installers/install_bazel.sh
-RUN bash /tmp/installers/install_cmake.sh ${INSTALL_MODE}
-RUN bash /tmp/installers/install_llvm_clang.sh
-RUN bash /tmp/installers/install_cyber_deps.sh
-RUN bash /tmp/installers/install_qa_tools.sh
-RUN bash /tmp/installers/install_visualizer_deps.sh
-RUN bash /tmp/installers/post_install.sh ${BUILD_STAGE}
-
-ENV PATH /opt/apollo/sysroot/bin:$PATH
-WORKDIR /apollo
+COPY installers/installer_base.sh /tmp/installers/
+COPY installers/install_nvidia_ml_for_jetson.sh /tmp/installers/
+COPY rcfiles/apollo.sh.sample /opt/apollo/rcfiles/
+
+RUN bash /tmp/installers/install_nvidia_ml_for_jetson.sh
+
+#COPY installers /tmp/installers
+#COPY rcfiles /opt/apollo/rcfiles
+## Pre-downloaded tarballs
+#COPY archive /tmp/archive
+#
+#RUN bash /tmp/installers/install_minimal_environment.sh ${GEOLOC}
+#RUN bash /tmp/installers/install_bazel.sh
+#RUN bash /tmp/installers/install_cmake.sh ${INSTALL_MODE}
+#RUN bash /tmp/installers/install_llvm_clang.sh
+#
+#RUN bash /tmp/installers/install_cyber_deps.sh
+#RUN bash /tmp/installers/install_qa_tools.sh
+#
+#RUN bash /tmp/installers/install_visualizer_deps.sh
+#RUN bash /tmp/installers/post_install.sh ${BUILD_STAGE}
+#
+#WORKDIR /apollo
```
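The dockerfile change narrows what each layer depends on: instead of `COPY installers /tmp/installers` pulling in the whole directory, only the scripts the remaining `RUN` step actually reads are copied. A minimal sketch of the pattern (base image chosen for illustration, not taken from this dockerfile):

```dockerfile
# Illustrative sketch of the selective-COPY pattern, not the Apollo file.
FROM ubuntu:18.04

# Copy only what the next RUN step reads. Editing any *other* installer
# script then leaves these COPY layers' checksums unchanged, so Docker's
# build cache can reuse both the COPY layers and the RUN layer below.
COPY installers/installer_base.sh /tmp/installers/
COPY installers/install_nvidia_ml_for_jetson.sh /tmp/installers/

RUN bash /tmp/installers/install_nvidia_ml_for_jetson.sh
```

By contrast, `COPY installers /tmp/installers` invalidates its layer, and every layer after it, whenever any file in the directory changes.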