Created by: NHZlX
- Fix TensorRT detection bug
  - Add a new search path for TensorRT in tensorrt.cmake
  - Add a better debug message
  - Fix TensorRT version detection

In the official NVIDIA Docker images, the TensorRT headers are located at `/usr/include/x86_64-linux-gnu` and the TensorRT libraries at `/usr/lib/x86_64-linux-gnu`, so detection via `-DTENSORRT_ROOT` fails to find TensorRT. There was also no debug/warning message telling developers that TensorRT detection had failed. In newer TensorRT versions (e.g. v6), `NV_TENSORRT_MAJOR` is defined in `NvInferVersion.h` instead of `NvInfer.h`, so a compatibility fix is added.
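
A minimal sketch of the detection flow described above, using the `TENSORRT_ROOT`, `TENSORRT_INCLUDE_DIR`, and `TENSORRT_LIBRARY` variables from tensorrt.cmake; the helper variables for version parsing are illustrative, not necessarily the exact names in the patch:

```cmake
# Search both a user-supplied root and the multiarch locations used by
# the NVIDIA Docker images.
find_path(TENSORRT_INCLUDE_DIR NvInfer.h
  PATHS ${TENSORRT_ROOT} ${TENSORRT_ROOT}/include
        /usr/include/x86_64-linux-gnu)

find_library(TENSORRT_LIBRARY NAMES nvinfer
  PATHS ${TENSORRT_ROOT} ${TENSORRT_ROOT}/lib
        /usr/lib/x86_64-linux-gnu)

if(TENSORRT_INCLUDE_DIR AND TENSORRT_LIBRARY)
  # TensorRT 6 moved the version macros to NvInferVersion.h; fall back to
  # NvInfer.h for older releases.
  if(EXISTS "${TENSORRT_INCLUDE_DIR}/NvInferVersion.h")
    file(READ "${TENSORRT_INCLUDE_DIR}/NvInferVersion.h" TENSORRT_VERSION_CONTENTS)
  else()
    file(READ "${TENSORRT_INCLUDE_DIR}/NvInfer.h" TENSORRT_VERSION_CONTENTS)
  endif()
  string(REGEX MATCH "define NV_TENSORRT_MAJOR +([0-9]+)"
         TENSORRT_MAJOR_VERSION "${TENSORRT_VERSION_CONTENTS}")
  string(REGEX REPLACE "define NV_TENSORRT_MAJOR +([0-9]+)" "\\1"
         TENSORRT_MAJOR_VERSION "${TENSORRT_MAJOR_VERSION}")
  message(STATUS "Found TensorRT v${TENSORRT_MAJOR_VERSION}: "
                 "${TENSORRT_INCLUDE_DIR}, ${TENSORRT_LIBRARY}")
else()
  # Say explicitly that detection failed instead of staying silent.
  message(WARNING "TensorRT was not found, TensorRT support is disabled.")
endif()
```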

- Fix TensorRT variables in CMake
  - Replace `${TENSORRT_ROOT}/include` with `${TENSORRT_INCLUDE_DIR}`
  - Replace `${TENSORRT_ROOT}/lib` with `${TENSORRT_LIBRARY}`

A manually typed path may point to the wrong TensorRT installation, so use the paths detected by the build system instead.
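
As an illustration of the consumer side, the detected variables are used directly; `EXTERNAL_LIBS` below is a placeholder list variable, not necessarily the one Paddle uses:

```cmake
# Before: include_directories(${TENSORRT_ROOT}/include)
# After: use the location that was actually detected.
include_directories(${TENSORRT_INCLUDE_DIR})

# Likewise, link against the detected library file instead of a path
# assembled from ${TENSORRT_ROOT}/lib.
list(APPEND EXTERNAL_LIBS ${TENSORRT_LIBRARY})
```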

- Fix TensorRT library path
  - Add a new variable `${TENSORRT_LIBRARY_DIR}`

inference_lib.cmake and setup.py.in need the directory containing the TensorRT library rather than the library file itself, so the new variable is added to provide it.
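
A sketch of how such a directory variable can be derived from the detected library file:

```cmake
# inference_lib.cmake and setup.py.in copy from a directory, so strip the
# file name from the detected library path.
get_filename_component(TENSORRT_LIBRARY_DIR ${TENSORRT_LIBRARY} DIRECTORY)
```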

- Add a more general search rule for TensorRT

Let CMake detect the architecture instead of assigning it manually, i.e. replace `x86_64-linux-gnu` with `${CMAKE_LIBRARY_ARCHITECTURE}`.
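
For example, on Debian-based systems `CMAKE_LIBRARY_ARCHITECTURE` expands to the multiarch triple (`x86_64-linux-gnu`, `aarch64-linux-gnu`, ...), so a single rule like the sketch below covers all of them (not a verbatim copy of the patch):

```cmake
# The multiarch triple is supplied by CMake rather than hard-coded.
find_library(TENSORRT_LIBRARY NAMES nvinfer
  PATHS ${TENSORRT_ROOT}/lib
        /usr/lib/${CMAKE_LIBRARY_ARCHITECTURE})
```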

- Add a more general search rule for TensorRT

Remove duplicate search rules for the TensorRT libraries and use `${TENSORRT_LIBRARY_DIR}` to get the full path of libnvinfer.so.
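
Illustratively, with the single directory variable in place the full library path can be formed without another search rule; the output variable name here is hypothetical:

```cmake
# Full path to the runtime library, built from the detected directory.
set(TENSORRT_INFER_LIB ${TENSORRT_LIBRARY_DIR}/libnvinfer.so)
```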
test=release/1.6