    CHERRY_PICK: Better TensorRT support (#20858) (#21578) · 0a4002f5
    Committed by Zhaolong Xing
    * Fix TensorRT detection bug
    
    1. Add new search path for TensorRT at tensorrt.cmake
    2. Add better debug message
    3. Fix the bug of detection of TensorRT version
    
    In the official NVIDIA docker image, the TensorRT headers are located at
    `/usr/include/x86_64-linux-gnu` and the TensorRT libraries are located
    at `/usr/lib/x86_64-linux-gnu`, so searching only under `-DTENSORRT_ROOT`
    fails to detect TensorRT.
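    
    A minimal sketch of the idea (variable names assumed, not the exact contents
    of tensorrt.cmake): probe the multiarch directories in addition to the
    user-supplied `TENSORRT_ROOT`.
    
    ```cmake
    # Sketch only: also probe the Debian/Ubuntu multiarch directories used by
    # the NVIDIA docker images, instead of relying solely on TENSORRT_ROOT.
    find_path(TENSORRT_INCLUDE_DIR NvInfer.h
      PATHS ${TENSORRT_ROOT} ${TENSORRT_ROOT}/include
            /usr/include/x86_64-linux-gnu
      NO_DEFAULT_PATH)
    find_library(TENSORRT_LIBRARY NAMES libnvinfer.so
      PATHS ${TENSORRT_ROOT} ${TENSORRT_ROOT}/lib
            /usr/lib/x86_64-linux-gnu
      NO_DEFAULT_PATH)
    ```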
    
    There was no debug/warning message to tell the developer that TensorRT
    failed to be detected.
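    
    A rough sketch of the kind of message added (wording assumed):
    
    ```cmake
    # Sketch only: report the detection result instead of failing silently.
    if(TENSORRT_INCLUDE_DIR AND TENSORRT_LIBRARY)
      message(STATUS "Found TensorRT headers in ${TENSORRT_INCLUDE_DIR}, "
                     "library at ${TENSORRT_LIBRARY}")
    else()
      message(WARNING "TensorRT was not found, TensorRT support is disabled.")
    endif()
    ```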
    
    In later versions of TensorRT (e.g. v6), `NV_TENSORRT_MAJOR` is
    defined in `NvInferVersion.h` instead of `NvInfer.h`, so add a
    compatibility fix.
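    
    A sketch of the compatibility check (regex and variable names assumed):
    
    ```cmake
    # Sketch only: NV_TENSORRT_MAJOR moved to NvInferVersion.h in TensorRT 6,
    # so fall back to that header when NvInfer.h no longer defines it.
    file(READ ${TENSORRT_INCLUDE_DIR}/NvInfer.h TENSORRT_VERSION_FILE_CONTENTS)
    string(REGEX MATCH "define NV_TENSORRT_MAJOR +([0-9]+)"
           TENSORRT_MAJOR_VERSION "${TENSORRT_VERSION_FILE_CONTENTS}")
    if("${TENSORRT_MAJOR_VERSION}" STREQUAL "")
      file(READ ${TENSORRT_INCLUDE_DIR}/NvInferVersion.h
           TENSORRT_VERSION_FILE_CONTENTS)
      string(REGEX MATCH "define NV_TENSORRT_MAJOR +([0-9]+)"
             TENSORRT_MAJOR_VERSION "${TENSORRT_VERSION_FILE_CONTENTS}")
    endif()
    ```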
    
    * Fix TensorRT variables in CMake
    
    1. Replace `${TENSORRT_ROOT}/include` with `${TENSORRT_INCLUDE_DIR}`
    2. Replace `${TENSORRT_ROOT}/lib` with `${TENSORRT_LIBRARY}`
    
    A manually typed path may point to an incorrect TensorRT location. Use
    the paths detected by the system instead.
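    
    A sketch of the substitution (the actual call sites differ;
    `include_directories` stands in for wherever `${TENSORRT_ROOT}/include`
    was used before):
    
    ```cmake
    # Sketch only: reuse the path that detection already found instead of
    # rebuilding it from TENSORRT_ROOT.
    include_directories(${TENSORRT_INCLUDE_DIR})  # previously ${TENSORRT_ROOT}/include
    # The library is referenced via ${TENSORRT_LIBRARY} in the same way,
    # replacing ${TENSORRT_ROOT}/lib.
    ```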
    
    * Fix TensorRT library path
    
    1. Add new variable - `${TENSORRT_LIBRARY_DIR}`
    2. Fix TensorRT library path
    
    inference_lib.cmake and setup.py.in need the directory of the TensorRT
    library rather than the library file itself, so add a new variable to fix it.
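    
    A sketch of how such a directory variable can be derived (exact code may
    differ):
    
    ```cmake
    # Sketch only: TENSORRT_LIBRARY points at the .so file itself, so derive
    # the containing directory for inference_lib.cmake and setup.py.in.
    get_filename_component(TENSORRT_LIBRARY_DIR ${TENSORRT_LIBRARY} DIRECTORY)
    ```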
    
    * Add more general search rule for TensorRT
    
    Let the system detect the architecture instead of manually assigning it,
    so replace `x86_64-linux-gnu` with `${CMAKE_LIBRARY_ARCHITECTURE}`.
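    
    A sketch with the multiarch triplet supplied by CMake (so e.g. aarch64
    builds pick up `aarch64-linux-gnu` automatically):
    
    ```cmake
    # Sketch only: let CMake report the multiarch triplet instead of
    # hard-coding x86_64-linux-gnu.
    find_library(TENSORRT_LIBRARY NAMES libnvinfer.so
      PATHS ${TENSORRT_ROOT} ${TENSORRT_ROOT}/lib
            /usr/lib/${CMAKE_LIBRARY_ARCHITECTURE}
      NO_DEFAULT_PATH)
    ```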
    
    * Add more general search rule for TensorRT
    
    Remove duplicate search rules for TensorRT libraries. Use
    `${TENSORRT_LIBRARY_DIR}` to get the full path of libnvinfer.so.
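    
    A sketch of the consumer side (`TRT_INFER_SO` is a hypothetical name; the
    real consumers are inference_lib.cmake and setup.py.in):
    
    ```cmake
    # Sketch only: build the full .so path from the single detected library
    # directory instead of repeating the search rules.
    set(TRT_INFER_SO ${TENSORRT_LIBRARY_DIR}/libnvinfer.so)
    ```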
    
    test=release/1.6
Changed file: tensorrt.cmake