Problem deploying the aishell model
Created by: yyhlvdl
I used the aishell model you released as-is and ran python deploy/demo_server.py, but got the following error:
root@095d9ada1b1d:/DeepSpeech# python deploy/demo_server.py
----------- Configuration Arguments -----------
alpha: 2.15
beam_size: 500
beta: 0.35
cutoff_prob: 1.0
cutoff_top_n: 40
decoding_method: ctc_beam_search
host_ip: localhost
host_port: 8086
lang_model_path: models/lm/zh_giga.no_cna_cmn.prune01244.klm
mean_std_path: asset/preprocess/mean_std.npz
model_path: asset/train/params.tar.gz
num_conv_layers: 2
num_rnn_layers: 3
rnn_layer_size: 2048
share_rnn_weights: False
specgram_type: linear
speech_save_dir: demo_cache
use_gpu: True
use_gru: True
vocab_path: asset/preprocess/vocab.txt
warmup_manifest: asset/preprocess/test
------------------------------------------------
I1205 10:14:34.175657 15 Util.cpp:166] commandline: --use_gpu=True --trainer_count=1
[INFO 2017-12-05 10:14:35,626 layers.py:2606] output for __conv_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2017-12-05 10:14:35,626 layers.py:3133] output for __batch_norm_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2017-12-05 10:14:35,627 layers.py:7224] output for __scale_sub_region_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2017-12-05 10:14:35,627 layers.py:2606] output for __conv_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2017-12-05 10:14:35,628 layers.py:3133] output for __batch_norm_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2017-12-05 10:14:35,628 layers.py:7224] output for __scale_sub_region_1__: c = 32, h = 41, w = 54, size = 70848
-----------------------------------------------------------
Warming up ...
('Warm-up Test Case %d: %s', 0, u'asset/data/aishell/wav/test/S0765/BAC009S0765W0205.wav')
[INFO 2017-12-05 10:14:42,337 model.py:230] begin to initialize the external scorer for decoding
[INFO 2017-12-05 10:14:50,941 model.py:241] language model: is_character_based = 1, max_order = 5, dict_size = 0
[INFO 2017-12-05 10:14:50,941 model.py:242] end initializing scorer. Start decoding ...
Traceback (most recent call last):
File "deploy/demo_server.py", line 224, in <module>
main()
File "deploy/demo_server.py", line 220, in main
start_server()
File "deploy/demo_server.py", line 204, in start_server
num_test_cases=3)
File "deploy/demo_server.py", line 143, in warm_up_test
(finish_time - start_time, transcript))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 40-94: ordinal not in range(128)
So I commented out the line that prints the transcript and re-ran it; the server was then able to continue. However:
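Rather than commenting the print out entirely, a less lossy workaround is to encode the unicode transcript to UTF-8 bytes explicitly, so the write never goes through the ASCII codec that sys.stdout defaults to inside the Docker container. A minimal sketch; the function and variable names are illustrative, not the actual demo_server.py code:

```python
# -*- coding: utf-8 -*-
import sys


def format_warmup_result(elapsed, transcript):
    """Format the warm-up result line as UTF-8 bytes, so printing it
    never depends on the terminal's default (possibly ASCII) codec."""
    text = u"Response Time: %f, Transcript: %s" % (elapsed, transcript)
    return text.encode('utf-8')


def print_warmup_result(elapsed, transcript):
    # Write raw bytes: sys.stdout.buffer on Python 3, sys.stdout itself
    # on Python 2 (where str is already bytes).
    data = format_warmup_result(elapsed, transcript) + b"\n"
    getattr(sys.stdout, 'buffer', sys.stdout).write(data)
```

Alternatively, setting the environment variable PYTHONIOENCODING=utf-8 before launching the server changes the stdout codec without touching the code at all.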
[INFO 2017-12-05 10:46:41,054 model.py:230] begin to initialize the external scorer for decoding
[INFO 2017-12-05 10:46:42,193 model.py:241] language model: is_character_based = 1, max_order = 5, dict_size = 0
[INFO 2017-12-05 10:46:42,193 model.py:242] end initializing scorer. Start decoding ...
Response Time: 1174.020508
('Warm-up Test Case %d: %s', 1, u'asset/data/aishell/wav/test/S0767/BAC009S0767W0141.wav')
Decoding a single file takes 1174 s, which is far too long. Is there any way to speed this up?
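From the configuration dump above, the flags that look most related to ctc_beam_search decoding cost are beam_size, cutoff_prob, and cutoff_top_n, so lowering them is presumably one thing to try, trading some accuracy for speed. The values below are just guesses on my part, not tuned settings:

```shell
# Smaller beam / tighter pruning should reduce per-utterance decoding time.
python deploy/demo_server.py \
    --beam_size 100 \
    --cutoff_prob 0.99 \
    --cutoff_top_n 40
```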