Commit d0319531 authored by qiuxuezhong

update README.md

Parent b3175bd2
# Abstract
DuReader is an end-to-end neural network model for machine reading comprehension style question answering, which aims to answer questions from given passages. We first match the question and passage with a bidirectional attention flow network to obtain the question-aware passage representation. Then we employ a pointer network to locate the positions of answers in the passages. Our experimental evaluations show that the DuReader model achieves state-of-the-art results on the DuReader dataset.
# Dataset
The DuReader dataset is a new large-scale, real-world, human-sourced MRC dataset in Chinese. DuReader focuses on real-world open-domain question answering. Its advantages over existing datasets are as follows:
- Real question
- Real article
- Real answer
- Real application scenario
- Rich annotation
# Network
DuReader is inspired by three classic reading comprehension models ([BiDAF](https://arxiv.org/abs/1611.01603), [Match-LSTM](https://arxiv.org/abs/1608.07905), [R-NET](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf)).
The DuReader model is a hierarchical multi-stage process and consists of five layers:
- **Word Embedding Layer** maps each word to a vector space using a pre-trained word embedding model.
- **Encoding Layer** extracts context information for each position in the question and passages with a bi-directional LSTM network.
- **Attention Flow Layer** couples the query and context vectors and produces a set of query-aware feature vectors for each word in the context. Please refer to [BiDAF](https://arxiv.org/abs/1611.01603) for more details.
- **Fusion Layer** employs two layers of bi-directional LSTM to capture the interaction among context words independent of the query.
- **Answer Point Network Layer with Attention Pooling** locates the answer span in the passages; please refer to [Match-LSTM](https://arxiv.org/abs/1608.07905) and [R-NET](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf) for more details.
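The Attention Flow step above can be sketched in a few lines of NumPy. This is a minimal illustration only, assuming a plain dot-product similarity; the actual model uses a trainable similarity function and PaddlePaddle ops:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_flow(context, query):
    """BiDAF-style attention: context (T, d), query (J, d).
    Returns query-aware context vectors of shape (T, 4*d)."""
    # Similarity matrix S[t, j] -- a plain dot product for brevity;
    # BiDAF uses a trainable tri-linear similarity function here.
    S = context @ query.T                               # (T, J)
    # Context-to-query attention: each context word attends over query words.
    c2q = softmax(S, axis=1) @ query                    # (T, d)
    # Query-to-context attention: weight context words by their best match.
    b = softmax(S.max(axis=1))                          # (T,)
    q2c = np.tile(b @ context, (context.shape[0], 1))   # (T, d)
    # Fuse into the query-aware representation, as in BiDAF's G layer.
    return np.concatenate([context, c2q, context * c2q, context * q2c], axis=1)

T, J, d = 5, 3, 8
G = attention_flow(np.random.randn(T, d), np.random.randn(J, d))
print(G.shape)  # (5, 32)
```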
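A common way to decode the start/end distributions produced by an answer pointer network is to pick the span maximizing the product of the two probabilities. This decoding heuristic is a sketch under assumptions, not necessarily the repo's exact inference code:

```python
import numpy as np

def best_span(p_start, p_end, max_len=10):
    """Pick (i, j) maximizing p_start[i] * p_end[j], with i <= j < i + max_len."""
    best, span = 0.0, (0, 0)
    for i, ps in enumerate(p_start):
        for j in range(i, min(i + max_len, len(p_end))):
            score = ps * p_end[j]
            if score > best:
                best, span = score, (i, j)
    return span

p_start = np.array([0.1, 0.6, 0.2, 0.1])  # start-position distribution
p_end   = np.array([0.1, 0.1, 0.7, 0.1])  # end-position distribution
print(best_span(p_start, p_end))  # (1, 2)
```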
## How to Run
### Download the Dataset
To download the DuReader dataset:
```
cat data/raw/trainset/search.train.json | python utils/preprocess.py > data/prep...
```
The preprocessed data can be downloaded automatically by `data/download.sh` and is stored in `data/preprocessed`; the raw data before preprocessing is under `data/raw`.
### Run with PaddlePaddle
#### Get the Vocab File
Once the preprocessed data is ready, you can run `utils/get_vocab.py` to generate the vocabulary file; for example, if you want to train a model with the Baidu Search data:
......
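Conceptually, building the vocabulary amounts to counting tokens in the preprocessed text and keeping the frequent ones. The sketch below is a hedged illustration, not the actual `utils/get_vocab.py` implementation; the whitespace tokenization and min-count threshold are assumptions:

```python
from collections import Counter

def build_vocab(lines, min_count=2):
    """Count whitespace-separated tokens and keep those seen at least min_count times."""
    counter = Counter()
    for line in lines:
        counter.update(line.split())
    # Most frequent tokens first.
    return [tok for tok, c in counter.most_common() if c >= min_count]

# Toy segmented input standing in for preprocessed question/passage text.
sample = ["百度 搜索 问答", "百度 问答 数据", "问答"]
print(build_vocab(sample, min_count=2))  # ['问答', '百度']
```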