We use Bleu and Rouge as evaluation metrics; the calculation of these metrics requires third-party scripts, which can be downloaded by running:
```
cd utils && bash download_thirdparty.sh
```
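Reported numbers must be computed with the downloaded official scripts, but as a rough illustration of what Bleu-4 measures, here is a minimal sentence-level sketch (a hypothetical simplification, not the official implementation; Rouge-L is analogous but based on longest-common-subsequence recall):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    """Sentence-level Bleu-N: geometric mean of clipped n-gram precisions
    times a brevity penalty. The 1e-9 floor stands in for smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total if overlap else 1e-9)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # penalize candidates shorter than the reference
    brevity = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return brevity * geo_mean
```

An identical candidate and reference score 1.0, while a candidate with no token overlap scores near zero.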
#### Environment Requirements
For now we have only tested on PaddlePaddle v1.0. To install PaddlePaddle and for more details, see the [PaddlePaddle Homepage](http://paddlepaddle.org).
### Preprocess the Data
After the dataset is downloaded, there is still some work to do before DuReader can be run. The DuReader dataset offers a rich set of documents for every user question, and these documents are too long for popular RC models to cope with. In our model, we preprocess the train and development sets by selecting the paragraph that is most related to the answer string, while for inference (where no golden answer is available) we select the paragraph that is most related to the question string. This preprocessing strategy is implemented in `utils/preprocess.py`. To preprocess the raw data yourself, first segment 'question', 'title' and 'paragraphs', store the segmented results in 'segmented_question', 'segmented_title' and 'segmented_paragraphs' (as in the downloaded preprocessed data), and then run `utils/preprocess.py` on the segmented files.
The preprocessed data can be downloaded automatically by `data/download.sh` and is stored in `data/preprocessed`; the raw data before preprocessing is under `data/raw`.
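The paragraph-selection idea can be sketched with a simple token-overlap heuristic (an illustration under the assumption that "most related" means highest token recall; the actual scoring in `utils/preprocess.py` may differ):

```python
def most_related_para(segmented_paragraphs, segmented_ref):
    """Return the index of the paragraph sharing the largest fraction of the
    reference tokens. For train/dev the reference is the answer string's
    tokens; for inference it is the question's tokens."""
    ref = set(segmented_ref)

    def recall(para):
        # fraction of reference tokens that appear in this paragraph
        return len(ref & set(para)) / len(ref) if ref else 0.0

    return max(range(len(segmented_paragraphs)),
               key=lambda i: recall(segmented_paragraphs[i]))
```

Given a question like "capital of france", this picks the paragraph containing those tokens rather than an unrelated one.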
#### Preparation
Before training the model, we have to make sure that the data is ready. For preparation, we check the data files, make directories and extract a vocabulary for later use: once the preprocessed data is ready, run `utils/get_vocab.py` with a specified task name to generate the vocabulary file, for example to prepare for training a model on the Baidu Search data. You can specify the files for train/dev/test by setting `trainset`/`devset`/`testset`.
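The vocabulary extraction can be sketched as counting tokens over the segmented fields; this is a simplified, hypothetical stand-in for `utils/get_vocab.py` (which additionally handles file I/O and task selection):

```python
from collections import Counter

def build_vocab(samples, min_count=1):
    """Build a frequency-sorted vocabulary from preprocessed samples.

    `samples` are parsed JSON objects carrying the 'segmented_question',
    'segmented_title' and 'segmented_paragraphs' fields described above."""
    counter = Counter()
    for sample in samples:
        counter.update(sample.get("segmented_question", []))
        for doc in sample.get("documents", []):
            counter.update(doc.get("segmented_title", []))
            for para in doc.get("segmented_paragraphs", []):
                counter.update(para)
    # keep tokens seen at least min_count times, most frequent first
    return [tok for tok, cnt in counter.most_common() if cnt >= min_count]
```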
#### Training
The DuReader model can be trained by running `run.py`; for the complete usage, run `python run.py -h`. You can also set hyper-parameters such as the learning rate by using `--learning_rate NUM`, or the number of training passes (for example, 10) with the corresponding option listed in `python run.py -h`. The training process includes an evaluation on the dev set after each training epoch. By default, the model with the highest Bleu-4 score on the dev set will be saved.
The basic training and inference process has been wrapped in `run.sh`; the basic usage is:
```
bash run.sh --TASK_NAME
```
For example, to train the model, run:
```
bash run.sh --train
```
#### Evaluation
To conduct a single evaluation on the dev set with an already trained model, run:
```
bash run.sh --evaluate --load_dir models/1
```
#### Inference
To run inference with a trained model, use the same command as for training but change `train` to `infer`, and add a `--testset <path_to_testset>` argument. For example, suppose a model has been trained successfully and its parameters are saved under `models/1`; to run inference with this model, combine the `--infer` task with `--load_dir models/1` and your `--testset` path.
#### Prediction
You can also predict answers for the samples in specified files using the following command: