The counterpart of **`Anakin`** is the widely acknowledged high-performance inference engine **`NVIDIA TensorRT 3`**. For models that TensorRT 3 does not support, we add support through custom plugins.
## Benchmark Model
The following convolutional neural networks are tested with both `Anakin` and `TensorRT 3`.
You can use a pretrained Caffe model or a model trained by yourself.
> Please note that you should convert the Caffe model (or a model from another framework) into an Anakin model with the help of the [`external converter ->`](../docs/Manual/Converter_en.md)
- [Vgg16](#1) *caffe model can be found [here->](https://gist.github.com/jimmie33/27c1c0a7736ba66c2395)*
- [Yolo](#2) *caffe model can be found [here->](https://github.com/hojel/caffe-yolo-model)*
- [Resnet50](#3) *caffe model can be found [here->](https://github.com/KaimingHe/deep-residual-networks#models)*
- [Resnet101](#4) *caffe model can be found [here->](https://github.com/KaimingHe/deep-residual-networks#models)*
- [Mobilenet v1](#5) *caffe model can be found [here->](https://github.com/shicai/MobileNet-Caffe)*
- [Mobilenet v2](#6) *caffe model can be found [here->](https://github.com/shicai/MobileNet-Caffe)*
- [RNN](#7) *not supported yet*
We tested them on a single GPU with a single thread.
### <span id = '1'>VGG16 </span>
...
...
| 8 | 421 | 351 |
| 32 | 637 | 551 |
## How to run these benchmark models?
> 1. First, convert the Caffe model with the [`external converter`](https://github.com/PaddlePaddle/Anakin/blob/b95f31e19993a192e7428b4fcf852b9fe9860e5f/docs/Manual/Converter_en.md).
> 2. Switch to the *source_root/benchmark/CNN* directory. Run `mkdir ./models` and put the converted Anakin models into this directory.
> 3. Run `sh run.sh`. Log files for each batch size will be created under `logs`, and a summary of model latency will be printed on the screen.
> 4. If you want more detailed per-op timing, set `ENABLE_OP_TIMER` to `YES` in CMakeLists.txt, then recompile and rerun. The detailed timing information will appear in the model log files.
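The steps above can be condensed into a short session, run from the Anakin source root (the paths follow the steps above; the model file name is a placeholder for your converted model):

```shell
# Steps 2-3 above, starting from the Anakin source root.
cd benchmark/CNN
mkdir -p models
# Copy your converted Anakin models into ./models before running, e.g.:
#   cp /path/to/your_model.anakin.bin ./models/
sh run.sh   # per-batch-size logs are written under ./logs; a latency summary is printed
```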
1. <span id='target'> TargetType </span>
TargetType is the platform type, such as X86 or GPU; Anakin has a corresponding identifier for each platform. DataType is the ordinary data type, which also has a corresponding identifier in Anakin. [LayOutType](#layout) is the data layout type, such as batch x channel x height x width (NxCxHxW), which Anakin represents internally with a struct. The mapping between Anakin types and primitive types is as follows:
Anakin TargetType | platform
:----: | :----:
NV | NVIDIA GPU
ARM | ARM
AMD | AMD GPU
X86 | X86
NVHX86 | NVIDIA GPU with Pinned Memory
2. <span id='datatype'> DataType </span>
Anakin DataType | C++ | Description
:---: | :---: | :---:
AK_HALF | short | fp16
AK_FLOAT | float | fp32
AK_DOUBLE | double | fp64
AK_INT8 | char | int8
AK_INT16 | short | int16
AK_INT32 | int | int32
AK_INT64 | long | int64
AK_UINT8 | unsigned char | uint8
AK_UINT16 | unsigned short | uint16
AK_UINT32 | unsigned int | uint32
AK_STRING | std::string | /
AK_BOOL | bool | /
AK_SHAPE | / | Anakin Shape
AK_TENSOR | / | Anakin Tensor
3. <span id='layout'> LayOutType </span>
Anakin LayOutType ( Tensor LayOut ) | Tensor Dimension | Tensor Support | Op Support