run_train.sh memory usage keeps growing on macOS — is there a leak?
Created by: kvinwang
I want to train DeepSpeech on my own corpus. During training, system memory usage keeps climbing until the swap file fills the disk. Yet the memory footprint of the Python process itself does not grow, so I can't tell where the memory is going.
(paddle)loong@MacBook-Pro:~/l/lab/py/ml/baidu/wav on master$ sh run_train.sh
----------- Configuration Arguments -----------
augment_conf_path: arg.config
batch_size: 4
dev_manifest: data/manifest.train
init_model_path: None
is_local: 1
learning_rate: 5e-05
max_duration: 27.0
mean_std_path: data/mean_std.npz
min_duration: 0.0
num_conv_layers: 2
num_iter_print: 100
num_passes: 40
num_proc_data: 16
num_rnn_layers: 3
output_model_dir: ./models
rnn_layer_size: 1024
share_rnn_weights: 0
shuffle_method: batch_shuffle_clipped
specgram_type: linear
test_off: 0
train_manifest: data/manifest.train
trainer_count: 1
use_gpu: 0
use_gru: 0
use_sortagrad: 1
vocab_path: data/vocab.txt
------------------------------------------------
I0202 21:40:14.468683 2907198400 Util.cpp:166] commandline: --use_gpu=0 --rnn_use_batch=True --log_clipping=True --trainer_count=1
[INFO 2018-02-02 21:40:14,484 layers.py:2689] output for __conv_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-02-02 21:40:14,485 layers.py:3251] output for __batch_norm_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-02-02 21:40:14,487 layers.py:7409] output for __scale_sub_region_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-02-02 21:40:14,488 layers.py:2689] output for __conv_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-02-02 21:40:14,490 layers.py:3251] output for __batch_norm_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-02-02 21:40:14,493 layers.py:7409] output for __scale_sub_region_1__: c = 32, h = 41, w = 54, size = 70848
I0202 21:40:14.751354 2907198400 GradientMachine.cpp:94] Initing parameters..
I0202 21:40:15.287204 2907198400 GradientMachine.cpp:101] Init parameters done.
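Since the main Python process shows no growth, one possibility worth checking is that the memory is held by the data-loader subprocesses (the run uses num_proc_data: 16), whose usage does not appear in the parent's RSS. A minimal stdlib sketch to compare the parent's memory against the memory of its reaped children (the Darwin/Linux unit difference for ru_maxrss is an assumption based on the BSD vs. Linux getrusage conventions; the spawned child here is just a stand-in allocator for illustration):

```python
import os
import resource
import subprocess
import sys

def peak_rss_mb(who):
    """Peak resident set size in MB for this process (RUSAGE_SELF)
    or its already-waited-for children (RUSAGE_CHILDREN)."""
    raw = resource.getrusage(who).ru_maxrss
    # ru_maxrss is reported in bytes on macOS but in kilobytes on Linux
    divisor = 1024 * 1024 if os.uname().sysname == "Darwin" else 1024
    return raw / divisor

# Spawn a child that allocates ~100 MB, wait for it, then compare:
# the allocation shows up under RUSAGE_CHILDREN, not RUSAGE_SELF.
subprocess.run([sys.executable, "-c", "x = bytearray(100 * 1024 * 1024)"])
print("self     peak RSS: %.1f MB" % peak_rss_mb(resource.RUSAGE_SELF))
print("children peak RSS: %.1f MB" % peak_rss_mb(resource.RUSAGE_CHILDREN))
```

If the "children" number grows pass after pass while "self" stays flat, the leak is likely in the worker pool rather than the trainer itself; lowering num_proc_data is a quick way to test that hypothesis.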