Commit 84bf18e7 authored by yangyaming

Adapt to pre-commit (0.16.0)

Parent 657e66e3
@@ -17,9 +17,11 @@ addons:
    - python-pip
    - python2.7-dev
  ssh_known_hosts: 52.76.173.135
before_install:
  - sudo pip install -U virtualenv pre-commit pip
  - docker pull paddlepaddle/paddle:latest
script:
  - exit_code=0
  - .travis/precommit.sh || exit_code=$(( exit_code | $? ))
@@ -34,6 +36,7 @@ script:
      curl $DEPLOY_DOCS_SH | bash -s $CONTENT_DEC_PASSWD $TRAVIS_BRANCH $MODELS_DIR
      exit_code=$(( exit_code | $? ))
      exit $exit_code
notifications:
  email:
    on_success: change
...
@@ -7,51 +7,51 @@ Jonas Gehring, Michael Auli, David Grangier, et al. Convolutional Sequence to Se
- In this tutorial, each line of a data file holds one sample, and each sample consists of a source sentence and a target sentence separated by '\t'. To use your own data, organize it as follows:
```
<source sentence>\t<target sentence>
```
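The tab-separated layout above is easy to load with plain Python. A minimal sketch — the helper name `read_pairs` and the sample sentences are illustrative, not part of the tutorial's code:

```python
def read_pairs(lines):
    """Parse lines of "<source>\t<target>" into (source, target) tuples."""
    pairs = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:  # skip blank lines
            continue
        source, target = line.split("\t", 1)
        pairs.append((source, target))
    return pairs

samples = read_pairs(["I love apples.\tIch liebe Äpfel.\n",
                      "Good morning.\tGuten Morgen.\n"])
print(samples[0])  # ('I love apples.', 'Ich liebe Äpfel.')
```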
# Training a Model
- Modify the following script if needed and then run:
```bash
python train.py \
  --train_data_path ./data/train_data \
  --test_data_path ./data/test_data \
  --src_dict_path ./data/src_dict \
  --trg_dict_path ./data/trg_dict \
  --enc_blocks "[(256, 3)] * 5" \
  --dec_blocks "[(256, 3)] * 3" \
  --emb_size 256 \
  --pos_size 200 \
  --drop_rate 0.1 \
  --use_gpu False \
  --trainer_count 1 \
  --batch_size 32 \
  --num_passes 20 \
  >train.log 2>&1
```
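Note that `--enc_blocks` and `--dec_blocks` take Python-expression strings rather than plain lists. A hedged sketch of how such a spec can expand into per-block settings — assuming, as is conventional for ConvS2S, that each tuple gives (hidden size, kernel width); `parse_blocks` is illustrative, not the actual `train.py` code:

```python
def parse_blocks(spec):
    """Expand a block-spec string such as "[(256, 3)] * 5" into tuples."""
    # eval (with builtins disabled) is what makes the "* 5" repetition
    # work; ast.literal_eval cannot, since "*" is not a literal.
    blocks = eval(spec, {"__builtins__": {}})
    return [tuple(b) for b in blocks]

enc_blocks = parse_blocks("[(256, 3)] * 5")
print(len(enc_blocks), enc_blocks[0])  # 5 (256, 3)
```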
# Inferring by a Trained Model
- Use a trained model for inference by running:
```bash
python infer.py \
  --infer_data_path ./data/infer_data \
  --src_dict_path ./data/src_dict \
  --trg_dict_path ./data/trg_dict \
  --enc_blocks "[(256, 3)] * 5" \
  --dec_blocks "[(256, 3)] * 3" \
  --emb_size 256 \
  --pos_size 200 \
  --drop_rate 0.1 \
  --use_gpu False \
  --trainer_count 1 \
  --max_len 100 \
  --beam_size 1 \
  --model_path ./params.pass-0.tar.gz \
  1>infer_result 2>infer.log
```
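With `--beam_size 1`, beam search reduces to greedy decoding; a larger beam keeps the top-k partial hypotheses at each step. The toy sketch below shows the generic top-k mechanics only — it is not the model's decoder, and the fixed per-step log-probabilities stand in for what the network would actually predict:

```python
import math

def beam_search(step_log_probs, beam_size, max_len):
    """Generic beam search over per-step token log-probabilities."""
    # beams: list of (token_sequence, cumulative_log_prob)
    beams = [([], 0.0)]
    for step in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, logp in enumerate(step_log_probs[step]):
                candidates.append((seq + [token], score + logp))
        # keep only the top `beam_size` hypotheses by total score
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams

# Toy 2-step, 3-token-vocabulary log-probabilities.
probs = [[math.log(0.6), math.log(0.3), math.log(0.1)],
         [math.log(0.2), math.log(0.7), math.log(0.1)]]
best_seq, best_score = beam_search(probs, beam_size=1, max_len=2)[0]
print(best_seq)  # [0, 1]
```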
# Notes
...
@@ -147,7 +147,8 @@ def encoder(token_emb,
     encoded_sum = paddle.layer.addto(input=[encoded_vec, embedding])
     # halve the variance of the sum
-    encoded_sum = paddle.layer.slope_intercept(input=encoded_sum, slope=math.sqrt(0.5))
+    encoded_sum = paddle.layer.slope_intercept(
+        input=encoded_sum, slope=math.sqrt(0.5))
     return encoded_vec, encoded_sum
...
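The `slope=math.sqrt(0.5)` in the hunk above is a variance trick from the ConvS2S paper: the sum of two independent, equal-variance signals has twice the variance of either one, and scaling by sqrt(0.5) restores the original scale. A quick numerical check in plain Python, independent of PaddlePaddle:

```python
import math
import random

random.seed(0)
n = 100000
a = [random.gauss(0, 1) for _ in range(n)]  # unit-variance signal
b = [random.gauss(0, 1) for _ in range(n)]  # independent unit-variance signal
s = [math.sqrt(0.5) * (x + y) for x, y in zip(a, b)]
# The mean is ~0, so the mean square approximates the variance.
var = sum(v * v for v in s) / n
print(round(var, 2))  # close to 1.0
```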