# High-Performance All-Scenario Deployment of PaddleDetection Models with FastDeploy

## Contents
- [Introduction to FastDeploy](#FastDeploy介绍)
- [PaddleDetection Model Deployment](#PaddleDetection模型部署)
- [FAQ](#常见问题)

## 1. Introduction to FastDeploy
<div id="FastDeploy介绍"></div>  

**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)** is an **all-scenario**, **easy-to-use**, and **highly efficient** AI inference deployment tool that supports **cloud, edge, and device** deployment. With FastDeploy, PaddleDetection models can be quickly deployed on 10+ hardware platforms, including X86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, Kunlunxin, Ascend, Rockchip, Amlogic, and Sophgo, using inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNX Runtime, RKNPU2, and SOPHGO.

<div align="center">

<img src="https://user-images.githubusercontent.com/31974251/224941235-d5ea4ed0-7626-4c62-8bbd-8e4fad1e72ad.png" >

</div>  
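As a minimal sketch of what deployment looks like, the snippet below loads an exported PP-YOLOE model and runs detection on one image. It assumes the `fastdeploy` Python package is installed and that the exported model files (`model.pdmodel`, `model.pdiparams`, `infer_cfg.yml`) and a test image exist locally; the file names here are placeholders, not fixed paths.

```python
import cv2
import fastdeploy as fd

# Choose device and backend via RuntimeOption; if nothing is set,
# FastDeploy picks a default backend for the current device.
option = fd.RuntimeOption()
option.use_gpu(0)  # run on NVIDIA GPU 0; remove this line to stay on CPU

# Load an exported PaddleDetection model (PP-YOLOE as an example).
model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)  # boxes, scores, and label ids
```

Swapping hardware or backend only changes the `RuntimeOption` calls (e.g. `option.use_trt_backend()` for TensorRT); the model-loading and prediction code stays the same, which is the point of the unified API.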

## 2. PaddleDetection Model Deployment
<div id="PaddleDetection模型部署"></div>  

### 2.1 Supported Hardware

|Hardware|Supported|Guide|Python|C++|
|:---:|:---:|:---:|:---:|:---:|
|X86 CPU|✅|[Link](./cpu-gpu)|✅|✅|
|NVIDIA GPU|✅|[Link](./cpu-gpu)|✅|✅|
|Phytium CPU|✅|[Link](./cpu-gpu)|✅|✅|
|ARM CPU|✅|[Link](./cpu-gpu)|✅|✅|
|Intel GPU (integrated)|✅|[Link](./cpu-gpu)|✅|✅|
|Intel GPU (discrete)|✅|[Link](./cpu-gpu)|✅|✅|
|Kunlunxin|✅|[Link](./kunlunxin)|✅|✅|
|Ascend|✅|[Link](./ascend)|✅|✅|
|Rockchip|✅|[Link](./rockchip)|✅|✅|
|Amlogic|✅|[Link](./amlogic)|-|✅|
|Sophgo|✅|[Link](./sophgo)|✅|✅|

### 2.2 Detailed Documentation
- X86 CPU
  - [Model Preparation](./cpu-gpu)
  - [Python Deployment Example](./cpu-gpu/python/)
  - [C++ Deployment Example](./cpu-gpu/cpp/)
- NVIDIA GPU
  - [Model Preparation](./cpu-gpu)
  - [Python Deployment Example](./cpu-gpu/python/)
  - [C++ Deployment Example](./cpu-gpu/cpp/)
- Phytium CPU
  - [Model Preparation](./cpu-gpu)
  - [Python Deployment Example](./cpu-gpu/python/)
  - [C++ Deployment Example](./cpu-gpu/cpp/)
- ARM CPU
  - [Model Preparation](./cpu-gpu)
  - [Python Deployment Example](./cpu-gpu/python/)
  - [C++ Deployment Example](./cpu-gpu/cpp/)
- Intel GPU
  - [Model Preparation](./cpu-gpu)
  - [Python Deployment Example](./cpu-gpu/python/)
  - [C++ Deployment Example](./cpu-gpu/cpp/)
- Kunlunxin XPU
  - [Model Preparation](./kunlunxin)
  - [Python Deployment Example](./kunlunxin/python/)
  - [C++ Deployment Example](./kunlunxin/cpp/)
- Ascend
  - [Model Preparation](./ascend)
  - [Python Deployment Example](./ascend/python/)
  - [C++ Deployment Example](./ascend/cpp/)
- Rockchip
  - [Model Preparation](./rockchip/)
  - [Python Deployment Example](./rockchip/rknpu2/)
  - [C++ Deployment Example](./rockchip/rknpu2/)
- Amlogic
  - [Model Preparation](./amlogic/a311d/)
  - [C++ Deployment Example](./amlogic/a311d/cpp/)
- Sophgo
  - [Model Preparation](./sophgo/)
  - [Python Deployment Example](./sophgo/python/)
  - [C++ Deployment Example](./sophgo/cpp/)

### 2.3 More Deployment Options

- [Android ARM CPU Deployment](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android#Detection)
- [Serving Deployment](./serving)
- [Web Deployment](./web)
- [Automated Model Compression Toolkit](./quantize)


## 3. FAQ
<div id="常见问题"></div>  

If you run into problems, check the FAQ collection, search the existing FastDeploy issues, or file a new [issue](https://github.com/PaddlePaddle/FastDeploy/issues):

[FAQ Collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)