Hwvideoframe is a CV preprocessing library built on CUDA. It performs image preprocessing operations on the GPU, which both speeds up preprocessing and increases GPU utilization.
## Preprocess API
Hwvideoframe provides a variety of image preprocessing methods:
- class Image2Gpubuffer
  - `__call__(img)`
    - img(np.array): image data.
- class Gpubuffer2Image
  - `__call__(img)`
    - img(np.array): image data.
- class Div
  - `__init__(value)`
    - value(float): the constant divisor.
  - `__call__(img)`
    - img(np.array): image data.
- class Sub
  - `__init__(subtractor)`
    - subtractor(list/float): the constant subtracted from each of the three 32-bit floating-point channels. When a list is given, its length must be three.
  - `__call__(img)`
    - img(np.array): image data in (C, H, W) layout.
- class Normalize
  - `__init__(mean, std)`
    - mean(list): per-channel mean. The length of the list must be three.
    - std(list): per-channel standard deviation. The length of the list must be three.
  - `__call__(img)`
    - img(np.array): image data in (C, H, W) layout.
- class CenterCrop
  - `__init__(size)`
    - size(int): crops the given image at the center; size must not be bigger than the input's height or width.
  - `__call__(img)`
    - img(np.array): image data in (C, H, W) layout.
- class Resize
  - `__init__(size)`
    - size(list/int): the expected image size. When a list is given, it must contain the expected width and height. When an int is given, the short side is set to size and the long side is scaled proportionally.
  - `__call__(img)`
    - img(np.array): image data in (C, H, W) layout.
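A minimal usage sketch follows. It assumes these classes are importable from `paddle_serving_app.reader` after compilation (see Quick start below); the import path and the normalization constants are illustrative, not prescribed by the library.

```python
import numpy as np
# Assumed import path; see the Quick start section below.
from paddle_serving_app.reader import (
    Image2Gpubuffer, Gpubuffer2Image, Div, Normalize, CenterCrop,
)

# A float32 image in (C, H, W) layout, as expected by the ops above.
img = np.random.rand(3, 224, 224).astype(np.float32)

ops = [
    Image2Gpubuffer(),                # upload the numpy array to a GPU buffer
    Div(255.0),                       # scale pixel values to [0, 1]
    Normalize([0.485, 0.456, 0.406],  # per-channel mean (illustrative values)
              [0.229, 0.224, 0.225]), # per-channel std (illustrative values)
    CenterCrop(224),                  # crop the center 224x224 region
    Gpubuffer2Image(),                # download the result back to a numpy array
]

for op in ops:
    img = op(img)
```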
## Quick start
[After compiling from source](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE.md), this project will be stored in `reader`.
## How to Test
Test file: `Serving/python/paddle_serving_app/reader/test_preprocess.py`

If you use another Python version, use the matching `pip` accordingly.
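A typical invocation, assuming you run from the root of the Serving repository (the exact command depends on your environment):

```shell
# Requires a CUDA-capable GPU and a compiled hwvideoframe (see Quick start).
python python/paddle_serving_app/reader/test_preprocess.py
```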
...
...
Compared with the CPU environment, the GPU environment needs to refer to the following table for additional cmake options:
**Note that the following table is a reference for non-Docker compilation environments. The Docker compilation environment is already configured with the relevant parameters, so they do not need to be specified during the cmake process.**
| cmake environment variable | meaning | GPU environment considerations | whether Docker environment is needed |
|----------------------------|---------|--------------------------------|--------------------------------------|
| CUDA_TOOLKIT_ROOT_DIR | CUDA installation path, usually /usr/local/cuda | required in all environments | no (/usr/local/cuda) |
| CUDNN_LIBRARY | directory where libcudnn.so.* is located, usually /usr/local/cuda/lib64/ | required in all environments | no (/usr/local/cuda/lib64/) |
| CUDA_CUDART_LIBRARY | directory where libcudart.so.* is located, usually /usr/local/cuda/lib64/ | required in all environments | no (/usr/local/cuda/lib64/) |
| TENSORRT_ROOT | parent directory of the directory where libnvinfer.so.* is located, depending on the TensorRT installation directory | not required for CUDA 9.0/10.0, required otherwise | no (/usr) |
If you are not in a Docker environment, you can refer to the following example. The exact paths depend on the current environment, and the code is only for reference. TENSORRT_LIBRARY_PATH is related to the TensorRT version and should be set according to the actual installation. For example, in a CUDA 10.1 environment the TensorRT version is 6.0 (/usr/local/TensorRT-6.0.1.5/targets/x86_64-linux-gnu/), while in a CUDA 10.2 environment the TensorRT version is 7.1 (/usr/local/TensorRT-7.1.3.4/targets/x86_64-linux-gnu/).
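For example, the variables can be exported before running cmake along these lines (a sketch for a CUDA 10.2 / TensorRT 7.1 installation; the paths below are illustrative and must match your actual environment):

```shell
# Illustrative paths; adjust to your installation.
export CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
export CUDNN_LIBRARY=/usr/local/cuda/lib64/
export CUDA_CUDART_LIBRARY=/usr/local/cuda/lib64/
export TENSORRT_LIBRARY_PATH=/usr/local/TensorRT-7.1.3.4/targets/x86_64-linux-gnu/
```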
If your model is bert_chinese_L-12_H-768_A-12_model, replace the 'bert_seq128_model' field in the following command with 'bert_chinese_L-12_H-768_A-12_model', and replace 'bert_seq128_client' with 'bert_chinese_L-12_H-768_A-12_client'.