This example demonstrates converting a DeepSpeech2 (DS2) model to ONNX format.

> We recommend using the U2/U2++ model instead of DS2, please see [here](../../u2pp_ol/wenetspeech/).
Please make sure the [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) and [onnx-simplifier](https://github.com/zh794390558/onnx-simplifier/tree/dyn_time_shape) versions are correct.
```
...
onnxoptimizer 0.2.7
onnxruntime 1.11.0
```
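If any of these are missing, they can usually be installed with pip; a minimal sketch (the pinned versions follow the list above, and `onnx-simplifier` must come from the forked `dyn_time_shape` branch linked above):

```bash
pip install paddle2onnx onnxoptimizer==0.2.7 onnxruntime==1.11.0
# The forked onnx-simplifier with dynamic time-shape support:
pip install "git+https://github.com/zh794390558/onnx-simplifier.git@dyn_time_shape"
```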
## Using
```
bash run.sh --stage 0 --stop_stage 5
```
1. convert deepspeech2 model to ONNX, using Paddle2ONNX.
2. check paddleinference and onnxruntime output equal.
3. optimize onnx model
4. check paddleinference and optimized onnxruntime output equal.
5. quantize onnx model
6. check paddleinference and quantized onnxruntime output equal.
For more details please see `run.sh`.
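As an illustration, the stage-0 conversion corresponds roughly to a Paddle2ONNX call like the one below; this is a sketch only, and the model directory, filenames, and opset version are assumptions (the real values are set in `run.sh`):

```bash
# Hypothetical paths and opset version; see run.sh for the actual ones.
paddle2onnx \
    --model_dir exp/model \
    --model_filename model.pdmodel \
    --params_filename model.pdiparams \
    --save_file exp/model.onnx \
    --opset_version 11
```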
## Outputs
The optimized onnx model is `exp/model.opt.onnx`, and the quantized model is `exp/model.optset11.quant.onnx`.
To show the graph, please use `local/netron.sh`.
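If the helper script does not fit your setup, the graph can also be viewed by launching Netron directly; a minimal sketch, assuming the `netron` pip package is installed:

```bash
pip install netron
# Serves an interactive view of the optimized model graph in the browser.
netron exp/model.opt.onnx
```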
A C++ deployment example for the `PaddleSpeech/examples/wenetspeech/asr1` recipe. The model is a static model from `export`; for how to export the model, please see [here](../../../../examples/wenetspeech/asr1/). If you want to use the exported model, `run.sh` will download it; for the model link, please see `run.sh`.

This example demonstrates how to use the U2/U2++ model to recognize `wav` files and compute `CER`. We use AISHELL-1 as test data.
## Testing with Aishell Test Data
### Source `path.sh` first
```bash
source path.sh
```
All binaries are under the `$SPEECHX_BUILD` dir; run `echo $SPEECHX_BUILD` to see the path.
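For example, to list the built binaries after sourcing `path.sh`:

```bash
ls "$SPEECHX_BUILD"
```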
### Download dataset and model
```
./run.sh --stop_stage 0
```
### Process `cmvn` and compute feature
```bash
./run.sh --stage 1 --stop_stage 1
```
If you only want to convert the `cmvn` file format, you can use this command:
```bash
./local/feat.sh --stage 1 --stop_stage 1
```
### Decoding using `feature` input
```
./run.sh --stage 2 --stop_stage 2
```
### Decoding using `wav` input
```
./run.sh --stage 3 --stop_stage 3
```
This stage uses `u2_recognizer_main` to recognize the wav file.
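To see which flags the recognizer accepts, you can print its help; a sketch, since the binary's exact location under the build dir may vary (the full invocation with model, cmvn, and wav paths is in `run.sh`):

```bash
# Locate the recognizer binary under the build dir and print its flags.
# The complete stage-3 invocation lives in run.sh.
RECOG_BIN=$(find "$SPEECHX_BUILD" -name u2_recognizer_main | head -n 1)
"$RECOG_BIN" --help
```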