Commit b54c993b authored by qiuxuezhong

add README.md

Parent a8d519eb
@@ -33,46 +33,40 @@ We use Bleu and Rouge as evaluation metrics, the calculation of these metrics re
```
cd utils && bash download_thirdparty.sh
```
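For intuition, here is a minimal, self-contained sketch of sentence-level Bleu-4 (uniform n-gram weights plus a brevity penalty). This is only an illustration; the official scores are computed by the third-party scripts downloaded above.
```
import math
from collections import Counter

def ngram_counts(tokens, n):
    # Count all n-grams of length n in the token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    precisions = []
    for n in range(1, 5):
        cand = ngram_counts(candidate, n)
        ref = ngram_counts(reference, n)
        overlap = sum((cand & ref).values())           # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smoothed to avoid log(0)
    # The brevity penalty discourages answers shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4.0)

print(bleu4('the cat sat on the mat'.split(), 'the cat is on the mat'.split()))
```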
#### Environment Requirements
For now we have only tested on PaddlePaddle v1.0. To install PaddlePaddle and for more details about it, see the [PaddlePaddle Homepage](http://paddlepaddle.org).
#### Preparation
Before training the model, we have to make sure that the data is ready. For preparation, we will check the data files, make directories and extract a vocabulary for later use. You can run the following command to do this with a specified task name:
```
sh run.sh --prepare
```
You can specify the files for train/dev/test by setting the `trainset`/`devset`/`testset` options.
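Conceptually, the vocabulary part of `--prepare` boils down to counting tokens in the segmented fields and dropping rare ones; the `prepare()` function in the diff below filters with `min_cnt=2` and randomly initializes embeddings. A rough, hedged sketch, where the field name and embedding size are illustrative:
```
import json
from collections import Counter

import numpy as np

counts = Counter()
with open('data/demo/trainset/search.train.json') as fin:
    for line in fin:
        sample = json.loads(line)
        counts.update(sample['segmented_question'])  # illustrative; the real code also counts passages

# Keep tokens seen at least twice, mirroring vocab.filter_tokens_by_cnt(min_cnt=2).
vocab = [token for token, cnt in counts.items() if cnt >= 2]
# Random embeddings, as in vocab.randomly_init_embeddings(args.embed_size).
embeddings = np.random.rand(len(vocab), 300)  # 300 stands in for --embed_size
```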
#### Training
To train the model, run `run.sh` with `--train`. You can also set hyper-parameters such as the learning rate by using `--learning_rate NUM`. For example, to train the model for 10 passes, you can run:
```
sh run.sh --train --pass_num 10
```
The training process includes an evaluation on the dev set after each training epoch. By default, the model with the best Bleu-4 score on the dev set will be saved.
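The save-the-best logic can be pictured as below; a self-contained sketch with stubbed training and evaluation, since `train()` in the diff further down implements the real loop:
```
import random

def train_one_pass(pass_id):
    # Stub: the real code runs one epoch of training and returns the loss.
    return random.random()

def evaluate_on_dev():
    # Stub: the real code runs validation() and returns Bleu/Rouge metrics.
    return {'Bleu-4': random.random()}

best_bleu4 = -1.0
for pass_id in range(1, 11):  # e.g. --pass_num 10
    loss = train_one_pass(pass_id)
    metrics = evaluate_on_dev()
    if metrics['Bleu-4'] > best_bleu4:  # keep only the best checkpoint so far
        best_bleu4 = metrics['Bleu-4']
        print('pass {}: new best Bleu-4 {:.4f}, saving model'.format(
            pass_id, best_bleu4))
```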
#### Evaluation
To conduct a single evaluation on the dev set with a model that has already been trained, you can run the following command:
```
sh run.sh --evaluate --load_dir models/1
```
#### Prediction
You can also predict answers for the samples in some files using the following command:
```
sh run.sh --predict --load_dir models/1 --testset ../data/demo/devset/search.dev.json
```
By default, the results are saved in the `../data/results/` folder. You can change this by specifying `--result_dir DIR_PATH`.
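Each line of a result file is one JSON object holding the predicted answers for a question (see the `json.dumps` call in `validation()` below). A hedged reader sketch, where the file name and field names are illustrative:
```
import json

# Hypothetical result file name; the actual name comes from args.result_name.
with open('../data/results/dev.predicted.json') as fin:
    for line in fin:
        pred = json.loads(line)
        print(pred.get('question_id'), pred.get('answers'))
```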
## Copyright and License
Copyright 2017 Baidu.com, Inc. All Rights Reserved
@@ -43,6 +43,7 @@ from utils import normalize
from utils import compute_bleu_rouge
from vocab import Vocab


def prepare_batch_input(insts, args):
    doc_num = args.doc_num
@@ -93,7 +94,8 @@ def print_para(train_prog, train_exe, logger, args):
        p_array = np.array(train_exe.scope.find_var(p_name).get_tensor())
        param_num = np.prod(p_array.shape)
        num_sum = num_sum + param_num
        logger.info(
            "param: {0}, mean={1} max={2} min={3} num={4} {5}".format(
                p_name,
                p_array.mean(),
                p_array.max(), p_array.min(), p_array.shape, param_num))
@@ -148,16 +150,16 @@ def find_best_answer(sample, start_prob, end_prob, padded_p_len, args):
    return best_answer
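For context, `find_best_answer` turns the start/end probability vectors into an answer span. A common strategy, shown here as a minimal NumPy sketch, is to maximize `start_prob[i] * end_prob[j]` over `j >= i` subject to a maximum answer length; this is an illustration, not necessarily the exact search used in this file.
```
import numpy as np

def best_span(start_prob, end_prob, max_a_len=50):
    # Exhaustive search over valid (start, end) pairs.
    best, best_score = (0, 0), -1.0
    for i in range(len(start_prob)):
        for j in range(i, min(i + max_a_len, len(end_prob))):
            score = start_prob[i] * end_prob[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

start = np.array([0.1, 0.6, 0.2, 0.1])
end = np.array([0.1, 0.1, 0.7, 0.1])
print(best_span(start, end))  # -> ((1, 2), 0.42)
```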
def validation(inference_program, avg_cost, s_probs, e_probs, feed_order, place,
               vocab, brc_data, logger, args):
    """
    Runs inference on the dev/test set, writes the predicted answers, and
    returns the average loss and the Bleu/Rouge metrics.
    """
    parallel_executor = fluid.ParallelExecutor(
        main_program=inference_program,
        use_cuda=bool(args.use_gpu),
        loss_name=avg_cost.name)
    print_para(inference_program, parallel_executor, logger, args)

    # Use test set as validation each pass
    total_loss = 0.0
@@ -211,7 +213,8 @@ def validation(inference_program, avg_cost, s_probs, e_probs, feed_order,
        with open(result_file, 'w') as fout:
            for pred_answer in pred_answers:
                fout.write(json.dumps(pred_answer, ensure_ascii=False) + '\n')
        logger.info('Saving {} results to {}'.format(args.result_name,
                                                     result_file))

    ave_loss = 1.0 * total_loss / count
@@ -228,6 +231,7 @@ def validation(inference_program, avg_cost, s_probs, e_probs, feed_order,
        bleu_rouge = None
    return ave_loss, bleu_rouge


def train(logger, args):
    logger.info('Load data_set and vocab...')
    with open(os.path.join(args.vocab_dir, 'vocab.data'), 'rb') as fin:
@@ -243,22 +247,22 @@ def train(logger, args):
    # build model
    main_program = fluid.Program()
    startup_prog = fluid.Program()
    main_program.random_seed = args.random_seed
    startup_prog.random_seed = args.random_seed
    with fluid.program_guard(main_program, startup_prog):
        with fluid.unique_name.guard():
            avg_cost, s_probs, e_probs, feed_order = rc_model.rc_model(
                args.hidden_size, vocab, args)
            # clone from default main program and use it as the validation program
            inference_program = main_program.clone(for_test=True)

            # build optimizer
            if args.optim == 'sgd':
                optimizer = fluid.optimizer.SGD(
                    learning_rate=args.learning_rate)
            elif args.optim == 'adam':
                optimizer = fluid.optimizer.Adam(
                    learning_rate=args.learning_rate)
            elif args.optim == 'rprop':
                optimizer = fluid.optimizer.RMSPropOptimizer(
                    learning_rate=args.learning_rate)
@@ -272,7 +276,8 @@ def train(logger, args):
            exe = Executor(place)
            if args.load_dir:
                logger.info('load from {}'.format(args.load_dir))
                fluid.io.load_persistables(
                    exe, args.load_dir, main_program=main_program)
            else:
                exe.run(startup_prog)
                embedding_para = fluid.global_scope().find_var(
@@ -281,16 +286,17 @@ def train(logger, args):
            # prepare data
            feed_list = [
                main_program.global_block().var(var_name)
                for var_name in feed_order
            ]
            feeder = fluid.DataFeeder(feed_list, place)
            logger.info('Training the model...')
            parallel_executor = fluid.ParallelExecutor(
                main_program=main_program,
                use_cuda=bool(args.use_gpu),
                loss_name=avg_cost.name)
            print_para(main_program, parallel_executor, logger, args)
            for pass_id in range(1, args.pass_num + 1):
                pass_start_time = time.time()
@@ -310,11 +316,12 @@ def train(logger, args):
                    n_batch_loss += cost_train
                    total_loss += cost_train * len(batch['raw_data'])
                    if log_every_n_batch > 0 and batch_id % log_every_n_batch == 0:
                        print_para(main_program, parallel_executor, logger,
                                   args)
                        logger.info(
                            'Average loss from batch {} to {} is {}'.format(
                                batch_id - log_every_n_batch + 1, batch_id,
                                "%.10f" % (n_batch_loss / log_every_n_batch)))
                        n_batch_loss = 0
                    if args.dev_interval > 0 and batch_id % args.dev_interval == 0:
                        eval_loss, bleu_rouge = validation(
@@ -324,11 +331,12 @@ def train(logger, args):
                        logger.info('Dev eval result: {}'.format(bleu_rouge))
                pass_end_time = time.time()
                logger.info('Evaluating the model after epoch {}'.format(
                    pass_id))
                if brc_data.dev_set is not None:
                    eval_loss, bleu_rouge = validation(
                        inference_program, avg_cost, s_probs, e_probs,
                        feed_order, place, vocab, brc_data, logger, args)
                    logger.info('Dev eval loss {}'.format(eval_loss))
                    logger.info('Dev eval result: {}'.format(bleu_rouge))
                else:
@@ -344,7 +352,10 @@ def train(logger, args):
                    os.makedirs(model_path)
                fluid.io.save_persistables(
                    executor=exe,
                    dirname=model_path,
                    main_program=main_program)


def evaluate(logger, args):
    logger.info('Load data_set and vocab...')
@@ -352,8 +363,8 @@ def evaluate(logger, args):
        vocab = pickle.load(fin)
    logger.info('vocab size is {} and embed dim is {}'.format(
        vocab.size(), vocab.embed_dim))
    brc_data = BRCDataset(
        args.max_p_num, args.max_p_len, args.max_q_len, dev_files=args.devset)
    logger.info('Converting text into ids...')
    brc_data.convert_to_ids(vocab)
    logger.info('Initialize the model...')
@@ -361,37 +372,39 @@ def evaluate(logger, args):
    # build model
    main_program = fluid.Program()
    startup_prog = fluid.Program()
    main_program.random_seed = args.random_seed
    startup_prog.random_seed = args.random_seed
    with fluid.program_guard(main_program, startup_prog):
        with fluid.unique_name.guard():
            avg_cost, s_probs, e_probs, feed_order = rc_model.rc_model(
                args.hidden_size, vocab, args)
            # initialize parameters
            place = core.CUDAPlace(0) if args.use_gpu else core.CPUPlace()
            exe = Executor(place)
            if args.load_dir:
                logger.info('load from {}'.format(args.load_dir))
                fluid.io.load_persistables(
                    exe, args.load_dir, main_program=main_program)
            else:
                logger.error('No model file to load ...')
                return

            # prepare data
            feed_list = [
                main_program.global_block().var(var_name)
                for var_name in feed_order
            ]
            feeder = fluid.DataFeeder(feed_list, place)
            inference_program = main_program.clone(for_test=True)
            eval_loss, bleu_rouge = validation(
                inference_program, avg_cost, s_probs, e_probs, feed_order,
                place, vocab, brc_data, logger, args)
            logger.info('Dev eval loss {}'.format(eval_loss))
            logger.info('Dev eval result: {}'.format(bleu_rouge))
            logger.info('Predicted answers are saved to {}'.format(
                os.path.join(args.result_dir)))


def predict(logger, args):
    logger.info('Load data_set and vocab...')
@@ -399,8 +412,8 @@ def predict(logger, args):
        vocab = pickle.load(fin)
    logger.info('vocab size is {} and embed dim is {}'.format(
        vocab.size(), vocab.embed_dim))
    brc_data = BRCDataset(
        args.max_p_num, args.max_p_len, args.max_q_len, dev_files=args.testset)
    logger.info('Converting text into ids...')
    brc_data.convert_to_ids(vocab)
    logger.info('Initialize the model...')
@@ -408,34 +421,35 @@ def predict(logger, args):
    # build model
    main_program = fluid.Program()
    startup_prog = fluid.Program()
    main_program.random_seed = args.random_seed
    startup_prog.random_seed = args.random_seed
    with fluid.program_guard(main_program, startup_prog):
        with fluid.unique_name.guard():
            avg_cost, s_probs, e_probs, feed_order = rc_model.rc_model(
                args.hidden_size, vocab, args)
            # initialize parameters
            place = core.CUDAPlace(0) if args.use_gpu else core.CPUPlace()
            exe = Executor(place)
            if args.load_dir:
                logger.info('load from {}'.format(args.load_dir))
                fluid.io.load_persistables(
                    exe, args.load_dir, main_program=main_program)
            else:
                logger.error('No model file to load ...')
                return

            # prepare data
            feed_list = [
                main_program.global_block().var(var_name)
                for var_name in feed_order
            ]
            feeder = fluid.DataFeeder(feed_list, place)
            inference_program = main_program.clone(for_test=True)
            eval_loss, bleu_rouge = validation(
                inference_program, avg_cost, s_probs, e_probs, feed_order,
                place, vocab, brc_data, logger, args)


def prepare(logger, args):
    """
@@ -443,7 +457,8 @@ def prepare(logger, args):
    """
    logger.info('Checking the data files...')
    for data_path in args.trainset + args.devset + args.testset:
        assert os.path.exists(data_path), '{} file does not exist.'.format(
            data_path)
    logger.info('Preparing the directories...')
    for dir_path in [args.vocab_dir, args.save_dir, args.result_dir]:
        if not os.path.exists(dir_path):
@@ -459,8 +474,8 @@ def prepare(logger, args):
    unfiltered_vocab_size = vocab.size()
    vocab.filter_tokens_by_cnt(min_cnt=2)
    filtered_num = unfiltered_vocab_size - vocab.size()
    logger.info('After filter {} tokens, the final vocab size is {}'.format(
        filtered_num, vocab.size()))
    logger.info('Assigning embeddings...')
    vocab.randomly_init_embeddings(args.embed_size)
@@ -471,6 +486,7 @@ def prepare(logger, args):
    logger.info('Done with preparing!')


if __name__ == '__main__':
    args = parse_args()
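    # The rest of the file is collapsed in this diff. Presumably the entry
    # point dispatches on the run.sh flags documented in the README, roughly:
    #
    #     if args.prepare:
    #         prepare(logger, args)
    #     if args.train:
    #         train(logger, args)
    #     if args.evaluate:
    #         evaluate(logger, args)
    #     if args.predict:
    #         predict(logger, args)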