Commit 6e5a2af4 authored by wangxiao1021

add examples

Parent fac8802f
*.pyc
__pycache__
pretrain_model
pretrain
output*
output_model
build
dist
......
# PaddlePALM
PaddlePALM (PArallel Learning from Multi-tasks) is a flexible, general and easy-to-use NLP large-scale pretraining and multi-task learning framework. PALM is a high-level framework aiming at **rapidly** developing **high-performance** NLP models. With PALM, it takes 8 steps to run a typical NLP task with supervised learning or pretraining, 6 steps to run multi-task learning over prepared tasks, and zero steps to adapt your code to large-scale training/inference (with multiple GPUs and multiple computation nodes).
PaddlePALM also provides state-of-the-art general-purpose architectures (BERT, ERNIE, RoBERTa, ...) as built-in model backbones. The model backbone, dataset reader and task output layers are decoupled, so you can easily swap any component for another candidate with quite minor changes to your code. In addition, PaddlePALM supports customized development of any component, e.g., backbone, task head, reader and optimizer, which gives developers high flexibility to adapt to complicated NLP scenarios.
Below are results with different backbones (BERT, ERNIE, RoBERTa) on several public datasets, along with successful multi-task learning examples.
<table>
<tbody>
<tr>
<th><strong>Dataset</strong></th>
<th colspan="3"><center><strong>chnsenticorp</strong></center></th>
<th colspan="3"><center><strong>Quora Question Pairs matching</strong></center></th>
<th colspan="3"><center><strong>MSRA-NER<br>(SIGHAN2006)</strong></center></th>
<th colspan="2"><center><strong>CMRC2018</strong></center></th>
</tr>
<tr>
<td rowspan="2"><strong>Metric</strong></td>
<td><center><strong>precision</strong></center></td>
<td><center><strong>recall</strong></center></td>
<td><center><strong>f1-score</strong></center></td>
<td><center><strong>precision</strong></center></td>
<td><center><strong>recall</strong></center></td>
<td><center><strong>f1-score</strong></center></td>
<td><center><strong>precision</strong></center></td>
<td><center><strong>recall</strong></center></td>
<td><center><strong>f1-score</strong></center></td>
<td><center><strong>em</strong></center></td>
<td><center><strong>f1-score</strong></center></td>
</tr>
<tr>
<td colspan="3"><center><strong>test</strong></center></td>
<td colspan="3"><center><strong>test</strong></center></td>
<td colspan="3"><center><strong>test</strong></center></td>
<td colspan="2"><center><strong>dev</strong></center></td>
</tr>
<tr>
<td><strong>ERNIE Base</strong></td>
<td>95.7</td>
<td>95.0</td>
<td>95.7</td>
<td>85.8</td>
<td>82.4</td>
<td>81.5</td>
<td>94.9</td>
<td>94.5</td>
<td>94.7</td>
<td>96.3</td>
<td>84.0</td>
</tr>
</tbody>
</table>
## Package Overview
| module | illustration |
| - | - |
| **paddlepalm** | an open source NLP pretraining and multitask learning framework, built on paddlepaddle. |
| **paddlepalm.reader** | a collection of elastic task-specific dataset readers. |
| **paddlepalm.backbone** | a collection of classic NLP representation models, e.g., BERT, ERNIE, RoBERTa. |
| **paddlepalm.head** | a collection of task-specific output layers. |
| **paddlepalm.lr_sched** | a collection of learning rate schedulers. |
| **paddlepalm.optimizer** | a collection of optimizers. |
| **paddlepalm.downloader** | a download module for pretrained models with configure and vocab files. |
| **paddlepalm.Trainer** | the core unit for starting a single-task training/predicting session. A trainer builds the computation graph, manages training and evaluation, and handles model/checkpoint saving and pretrain_model/checkpoint loading. |
| **paddlepalm.MultiHeadTrainer** | the core unit for starting a multi-task training/predicting session. A MultiHeadTrainer is built on top of several Trainers. In addition to the Trainer functionality it inherits, it handles model backbone reuse across tasks, trainer sampling for multi-task learning, and multi-head inference for effective evaluation and prediction. |
## Installation
PaddlePALM supports python2 and python3, Linux and Windows, CPU and GPU. The preferred way to install PaddlePALM is via `pip`. Just run the following command in your shell:
```bash
pip install paddlepalm
```
### Installing via source
For offline machines, you can install from source:
```shell
git clone https://github.com/PaddlePaddle/PALM.git
cd PALM && python setup.py install
```
### Library Dependencies
- Python >= 2.7
- cuda >= 9.0
- cudnn >= 7.0
- PaddlePaddle >= 1.6.3 (see the [installation guide](http://www.paddlepaddle.org/#quick-start))
### Downloading pretrain models
We incorporate many pretrained models to initialize model backbone parameters. Training big NLP models, e.g., 12-layer transformers, from pretrained models is practically much more effective than from randomly initialized parameters. To see all available pretrained models and download them, run the following code in a python interpreter (enter `python` in your shell):
```python
>>> from paddlepalm import downloader
>>> downloader.ls('pretrain')
Available pretrain items:
  => roberta-cn-base
  => roberta-cn-large
  => bert-cn-base
  => bert-cn-large
  => bert-en-uncased-base
  => bert-en-uncased-large
  => bert-en-cased-base
  => bert-en-cased-large
  => ernie-en-uncased-base
  => ernie-en-uncased-large
  ...
>>> downloader.download('pretrain', 'bert-en-uncased-base', './pretrain_models')
```
## Usage
8 steps to start a typical NLP training task (a code sketch follows at the end of this section):

1. use `paddlepalm.reader` to create a *reader* for dataset loading and input feature generation, then call the `reader.load_data` method to load your training data.
2. use `paddlepalm.backbone` to create a model *backbone* to extract text features (e.g., contextual word embeddings, sentence embeddings).
3. register your *reader* with your *backbone* through the `reader.register_with` method. After this step, your reader is able to yield the input features used by the backbone.
4. use `paddlepalm.head` to create a task output *head*. This head provides the task loss for training and the prediction results for model inference.
5. create a task *trainer* with `paddlepalm.Trainer`, then build the forward graph with the backbone and task head (created in steps 2 and 4) through `trainer.build_forward`.
6. use `paddlepalm.optimizer` (and `paddlepalm.lr_sched` if necessary) to create an *optimizer*, then build the backward graph through `trainer.build_backward`.
7. fit the prepared reader and data (from step 1) to the trainer with the `trainer.fit_reader` method.
8. load a pretrained model with `trainer.load_pretrain`, or a checkpoint with `trainer.load_ckpt`, or nothing for training from scratch, then do training with `trainer.train`.

For more implementation details see the following demos: [Sentiment Classification](), [Quora Question Pairs matching](), [Tagging](), [SQuAD machine Reading Comprehension]().

To save models/checkpoints during training, just call the `trainer.set_saver` method. More implementation details are given [here]().

To do prediction/evaluation after a training stage, just create another reader, backbone and head instance with `phase='predict'` (repeating steps 1~4 above), then do predicting with the `predict` method of the trainer (no need to create another trainer). More implementation details are given [here]().

To run in multi-task learning mode:

1. repeatedly create components (i.e., reader, backbone and head) for each task, following steps 1~5 above.
2. create empty trainers (each trainer corresponds to one task) and pass them to create a `MultiHeadTrainer`.
3. build the multi-task forward graph with the `multi_head_trainer.build_forward` method.
4. use `paddlepalm.optimizer` (and `paddlepalm.lr_sched` if necessary) to create an *optimizer*, then build the backward graph through `multi_head_trainer.build_backward`.
5. fit all prepared readers and data to the multi_head_trainer with the `multi_head_trainer.fit_readers` method.
6. randomly initialize model parameters with `multi_head_trainer.random_init_params` (and `multi_head_trainer.load_pretrain` if needed), then do training with `multi_head_trainer.train`.

The save/load and predict operations of a multi_head_trainer are the same as those of a trainer.

More implementation details of running multi-task learning with multi_head_trainer can be found [here]().
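To make the single-task flow above (steps 1~8) concrete, here is a minimal sketch. Only the method names mentioned in the steps (`load_data`, `register_with`, `build_forward`, `build_backward`, `fit_reader`, `load_pretrain`, `train`) come from this document; the constructor names, arguments, paths and hyperparameters below are illustrative assumptions, not the library's exact API.

```python
import paddlepalm as palm

# 1. a reader for dataset loading and input feature generation (constructor assumed)
reader = palm.reader.ClassifyReader('pretrain_models/bert-en-uncased-base/vocab.txt', max_seqlen=128)
reader.load_data('data/train.tsv', batch_size=32)
# 2. a backbone to extract text features (constructor assumed)
backbone = palm.backbone.BERT('pretrain_models/bert-en-uncased-base/bert_config.json')
# 3. register the reader with the backbone so it yields the backbone's input features
reader.register_with(backbone)
# 4. a task output head (constructor assumed)
head = palm.head.Classify(num_classes=2)
# 5. a trainer; build the forward graph from backbone and head
trainer = palm.Trainer('senti_cls')
loss_var = trainer.build_forward(backbone, head)
# 6. an optimizer, then the backward graph (optimizer signature assumed)
adam = palm.optimizer.Adam(loss_var, lr=5e-5)
trainer.build_backward(optimizer=adam)
# 7. fit the prepared reader and data to the trainer
trainer.fit_reader(reader)
# 8. load a pretrained model and train
trainer.load_pretrain('pretrain_models/bert-en-uncased-base')
trainer.train(print_steps=10)
```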
## Framework Code Structure
```text
.
├── mtl_controller.py  # task controller; creates and schedules task instances for multi-task learning
├── task_instance.py   # task instance class; handles instance configuration, training process management, saving and loading
├── default_settings.py  # default environment variables and framework settings
├── utils  # core framework utilities
│   ├── config_helper.py  # configuration helper; joint parsing of command line, json and yaml
│   ├── reader_helper.py  # merging, sampling, scheduling and normalization of multi-task dataset iterators; connects python generators to the computation graph
│   ├── saver.py  # model saving and loading
│   ├── print_helper.py  # log formatting utilities
│   ├── plot_helper.py  # command-line plotting utilities
│   └── textprocess_helper.py  # text processing helper functions
├── backbone  # built-in backbone networks
│   ├── ernie.py  # ERNIE model
│   ├── bert.py  # BERT model
│   └── utils  # reusable helper functions for implementing backbones
├── reader  # built-in dataset loading and processing tools
│   ├── cls.py  # text classification dataset tools
│   ├── match.py  # text matching dataset tools
│   ├── mrc.py  # machine reading comprehension dataset tools
│   └── mlm.py  # masked language model dataset generation and processing tools
└── paradigm  # task paradigms
    ├── cls.py  # text classification
    ├── match.py  # text matching
    ├── mrc.py  # machine reading comprehension
    └── mlm.py  # masked language model
```
## Before You Start
### Background
By default, the framework uses one-to-many parameter sharing, as shown below.
![image-20191022194400259](https://tva1.sinaimg.cn/large/006y8mN6ly1g88ajvpqmgj31hu07wn5s.jpg)
For example, suppose we have a target task MRC and two auxiliary tasks MLM and MATCH, and we want MLM and MATCH to improve the target task MRC's test-set performance (i.e., the model's generalization ability). We can let the three tasks share the same text feature extractor (e.g., BERT or ERNIE), then define a separate output layer for each task and compute each task's own loss.
By default, the framework trains with task sampling plus mini-batch sampling (alternating mini-batches optimization): at each training step it first samples one of the candidate tasks (with sampling weights determined by the user-set `mix_ratio`), then samples a mini-batch from that task's training set (with the number of samples determined by the user-set `batch_size`). A toy sketch of the task-sampling step follows.
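The following sketch (plain Python, not framework code; the task names and weights are taken from DEMO2 below) illustrates the per-step task sampling:

```python
import random

def choose_task(mix_ratio):
    """Sample which task to train at this step, weighted by mix_ratio."""
    names = list(mix_ratio)
    weights = [mix_ratio[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# e.g., the DEMO2 setting: each auxiliary task is sampled half as often as mrqa
counts = {'mrqa': 0, 'mlm4mrqa': 0, 'match4mrqa': 0}
for _ in range(10000):
    counts[choose_task({'mrqa': 1.0, 'mlm4mrqa': 0.5, 'match4mrqa': 0.5})] += 1
print(counts)  # roughly 5000 / 2500 / 2500
```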
### How the Framework Works
The figure below shows how the paddlepalm framework operates.
![PALM原理图](https://tva1.sinaimg.cn/large/006y8mN6ly1g8j1isf3fcj31ne0tyqbd.jpg)
First, the user writes configuration files for dataset loading/processing, the backbone and the tasks (the framework implements a flexible and easy-to-use [configuration broadcast mechanism](#configuration-broadcast-mechanism)), and uses them to create the multi-task learning controller (`Controller`). The controller then creates the task instances and manages and schedules their parameters according to the training and prediction interfaces the user calls. The three demos and the advanced guide below provide both a quick start and a deeper understanding of paddlepalm.
### Pretrained Models
#### Download
We provide pretrained models for backbones such as BERT and ERNIE. To speed up convergence and get better test-set performance, we strongly recommend basing multi-task learning on a pretrained model rather than on randomly initialized parameters. Run `script/download_pretrain_models <model_name>` to download the pretrained model you need; for example, to download the pretrained BERT model (uncased large):
```shell
bash script/download_pretrain_backbone.sh bert
```
The script creates a pretrain_model directory in the **current folder** (note: when running the demos, the pretrain_model folder must be under the PALM project directory) with a bert subdirectory containing the pretrained model (inside the `params` folder), the network configuration (`bert_config.json`) and the vocabulary (`vocab.txt`). Besides BERT, the script also supports one-click download of the pretrained ERNIE model (uncased large); just change `<model_name>` to `ernie`. The full list of available pretrained models is at [paddlenlp/lark](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK).
#### Conversion
Note that pretrained models cannot be used by the framework directly. We provide a conversion script that converts them into paddlepalm's model format. For example, running `script/convert_params.sh` converts the pretrained BERT model into the framework's format.
```shell
bash script/convert_params.sh pretrain_model/bert/params
```
Note: the recovery operation below is **not required** when running the demos described later.
If you need to restore a converted paddlepalm model back to the original pretrained model, run `script/recover_params.sh`.
```shell
bash script/recover_params.sh pretrain_model/bert/params
```
## Three Demos to Get Started with PaddlePALM
### DEMO1: Single-task Training
The framework supports traditional single-task training for any built-in task. Here we launch training for a non-trivial machine reading comprehension task: the `data/mrqa` folder provides part of the competition data from the [EMNLP2019 MRQA machine reading comprehension evaluation](https://mrqa.github.io/shared). Using this data, we carry out single-task learning for the BERT-based machine reading comprehension task MRQA.
Run the following script to start training for this section with one command:
```shell
bash run_demo1.sh
```
Using this task as an example, we now walk through how to implement it with the paddlepalm framework.
**1. Configure the task instance**
First, write the task instance's configuration file `mrqa.yaml`. If this task instance takes part in training or prediction, the framework automatically parses the file and creates the corresponding task instance. The file must be valid yaml. At minimum, a task instance configuration file should contain the three fields `train_file`, `reader` and `paradigm`: the training set path, the dataset loading/processing tool to use, and the task paradigm, respectively.
```yaml
train_file: data/mrqa/train.json
reader: mrc
paradigm: mrc
```
*Note: the other built-in dataset readers are listed [here](#appendix-a-built-in-dataset-readers-reader); the task paradigms are listed [here](#appendix-c-built-in-task-paradigms-paradigm).*
In addition, we need to configure the reader's preprocessing rules; the preprocessing options supported by each built-in reader are described [here](#appendix-a-built-in-dataset-readers-reader). The preprocessing rules are likewise written directly into `mrqa.yaml`.
```yaml
max_seq_len: 512
max_query_len: 64
doc_stride: 128 # MRQA contains long documents, so we process samples with a sliding window; the stride is set to 128
do_lower_case: True
vocab_path: "pretrain_model/bert/vocab.txt"
```
More details on configuring a task instance (choosing the right reader, paradigm and backbone) are given [here](#choosing-the-reader-backbone-and-paradigm).
**2. Configure the backbone and training rules**
Next, write the global configuration file `config_demo1.yaml`. Here we configure the backbone, the multi-task learning rules and the parameters [broadcast to the task instances](#configuration-broadcast-mechanism), again in yaml. For example, we can configure the tasks to learn (`task_instance`), the model save path (`save_path`), the backbone to use (`backbone`), the optimizer (`optimizer`), and so on.
```yaml
task_instance: "mrqa"
save_path: "output_model/firstrun"
backbone: "bert"
backbone_config_path: "pretrain_model/bert/bert_config.json"
optimizer: "adam"
learning_rate: 3e-5
batch_size: 4
num_epochs: 2
warmup_proportion: 0.1
```
Here `task_instance` is the file name of the task instance configuration file we just wrote, `mrqa` **(note: do not include the .yaml suffix!)**. Once multi-task learning starts, the framework looks up the configuration files of the task instances listed in `task_instance` and creates them.
In addition, the backbone configuration can either be written directly into the global configuration file or be described in a separate json file whose path is given by `backbone_config_path` in the global configuration file.
*Note: the other built-in global parameters supported by the framework are listed [here](#appendix-d-configurable-global-parameters).*
**3. Start training**
Now we launch training of the MRQA task (the code lives in `demo1.py`). As described in [How the Framework Works](#how-the-framework-works), the framework's core component is the `Controller`, which is responsible for starting multi-task learning.
```python
# Demo 1: single task training of MRQA
import paddlepalm as palm
if __name__ == '__main__':
controller = palm.Controller('config_demo1.yaml', task_dir='demo1_tasks')
controller.load_pretrain('pretrain_model/bert/params')
controller.train()
```
The training log looks like the following; the loss converges as training proceeds. When training finishes, the `Controller` automatically saves the inference model for the mrqa task.
```
Global step: 10. Task: mrqa, step 10/135 (epoch 0), loss: 5.928, speed: 0.67 steps/s
Global step: 20. Task: mrqa, step 20/135 (epoch 0), loss: 4.594, speed: 0.75 steps/s
Global step: 30. Task: mrqa, step 30/135 (epoch 0), loss: 1.663, speed: 0.75 steps/s
...
Global step: 250. Task: mrqa, step 115/135 (epoch 1), loss: 1.391, speed: 0.75 steps/s
Global step: 260. Task: mrqa, step 125/135 (epoch 1), loss: 1.871, speed: 0.75 steps/s
Global step: 270. Task: mrqa, step 135/135 (epoch 1), loss: 1.544, speed: 0.75 steps/s
mrqa: train finished!
mrqa: inference model saved at output_model/firstrun/mrqa/infer_model
```
### DEMO2: Multi-task Auxiliary Training and Target-task Prediction
In this section we consider a more complex learning objective: we introduce a masked language model (MLM) task and a question-answer matching (QA Match) task to assist the training of the MRQA task from the previous section; the training data live in `data/mlm4mrqa` and `data/match4mrqa` respectively. We also switch to the ERNIE backbone for better results. After multi-task training finishes, we use the trained model to predict on the MRQA test set.
Run the following script to start training for this section:
```shell
bash run_demo2.sh
```
Using this task as an example, we now walk through how to implement this more complex multi-task learning setup with paddlepalm.
**1. Configure the task instances**
First, as in the previous section, create the task instance configuration files `mlm4mrqa.yaml` and `match4mrqa.yaml` for the MLM and Matching tasks:
```yaml
----- mlm4mrqa.yaml -----
train_file: "data/mlm4mrqa/train.tsv"
reader: mlm
paradigm: mlm
----- match4mrqa.yaml -----
train_file: "data/match/train.tsv"
reader: match
paradigm: match
```
Since we want to predict on the MRQA test set after training, we also append the prediction-related configuration to the `mrqa.yaml` written earlier:
```yaml
pred_file: data/mrqa/dev.json
pred_output_path: 'mrqa_output'
max_answer_len: 30
n_best_size: 20
```
**2. Configure global parameters**
Since the MRQA, MLM and Matching tasks share the same vocabulary, casing configuration, truncation length and so on, we can write these shared parameters into the global configuration file `config_demo2.yaml`; **the framework automatically broadcasts the settings in this file to every task instance.**
```yaml
task_instance: "mrqa, mlm4mrqa, match4mrqa"
target_tag: 1,0,0
save_path: "output_model/secondrun"
backbone: "ernie"
backbone_config_path: "pretrain_model/ernie/ernie_config.json"
vocab_path: "pretrain_model/ernie/vocab.txt"
do_lower_case: True
max_seq_len: 512 # parameters in the global configuration file are automatically broadcast to each task instance
batch_size: 4
num_epochs: 2
optimizer: "adam"
learning_rate: 3e-5
warmup_proportion: 0.1
weight_decay: 0.1
```
Here we use `target_tag` to mark target vs. auxiliary tasks, with the tags separated by commas `,`. The elements of target_tag correspond one-to-one with those of task_instance: a tag of 1 marks the corresponding task as a target task; 0 marks it as an auxiliary task. By default, all tasks are target tasks (i.e., `target_tag` defaults to all 1s).
Auxiliary tasks do not save inference models and do not affect training termination; they merely "accompany" training in the hope of improving the model's generalization. When every target task has reached its expected number of training steps, multi-task learning terminates and the framework automatically saves an inference model for each target task under the configured `save_path`.
Also note that `num_epochs` here refers to the number of training epochs (passes over the training set) of the target task `mrqa`.
During training, by default each training step samples among the tasks with equal probability to decide which task to train on at that step. The sampling probability of each task, auxiliary tasks included, can be controlled: to change the sampling ratio, set the `mix_ratio` field, e.g.
```yaml
mix_ratio: 1.0, 0.5, 0.5
```
With the setting above added to the global configuration file, the sampling probability (and hence the expected number of training steps) of the auxiliary tasks `mlm4mrqa` and `match4mrqa` is half that of the `mrqa` task. The advanced guide covers sampling probabilities in more detail.
**3. Start multi-task training**
```python
import paddlepalm as palm
if __name__ == '__main__':
controller = palm.Controller('config_demo2.yaml', task_dir='demo2_tasks')
controller.load_pretrain('pretrain_model/ernie/params')
controller.train()
```
The training log looks like the following; each task's loss decreases over the course of training.
```
Global step: 10. Task: mrqa, step 4/135 (epoch 0), loss: 6.235, speed: 0.75 steps/s
Global step: 20. Task: mrqa, step 8/135 (epoch 0), loss: 5.652, speed: 0.75 steps/s
Global step: 30. Task: mrqa, step 13/135 (epoch 0), loss: 6.031, speed: 0.75 steps/s
Global step: 40. Task: match4mrqa, step 13/25 (epoch 0), loss: 0.758, speed: 2.52 steps/s
Global step: 50. Task: mlm4mrqa, step 14/30 (epoch 0), loss: 7.322, speed: 3.24 steps/s
...
Global step: 547. Task: match4mrqa, step 13/25 (epoch 5), loss: 0.400, speed: 2.23 steps/s
Global step: 548. Task: match4mrqa, step 14/25 (epoch 5), loss: 0.121, speed: 3.03 steps/s
Global step: 549. Task: mrqa, step 134/135 (epoch 1), loss: 0.824, speed: 0.75 steps/s
Global step: 550. Task: mlm4mrqa, step 22/30 (epoch 4), loss: 6.903, speed: 3.59 steps/s
Global step: 551. Task: mrqa, step 135/135 (epoch 1), loss: 3.408, speed: 0.75 steps/s
mrqa: train finished!
mrqa: inference model saved at output_model/secondrun/mrqa/infer_model
```
**4. Prediction**
Once we have a target task's inference model, we can load it to predict on that task's test set. During multi-task training, a subdirectory named after each target task is created under the `save_path` given in the global configuration file, each containing the inference model folder `infermodel`. We pass this path to the framework's `controller` to run prediction for the target task.
For example, the previous section produced the inference model of the mrqa task. First create a new *Controller*, **setting the `for_train` flag to *False* at creation time**. Then call the *pred* interface with the name of the task instance to predict and the path of the inference model. The results are saved by default to the path given by `pred_output_path` in the task instance's configuration file. The code looks like this:
```python
controller = palm.Controller(config='config_demo2.yaml', task_dir='demo2_tasks', for_train=False)
controller.pred('mrqa', inference_model_dir='output_model/secondrun/mrqa/infermodel')
```
In the `predictions.json` file under the `mrqa_output/` folder we configured in the yaml file earlier, we can see predictions like the following:
```json
{
    "3f02f171c82e49828580007a71eefc31": "Ethan Allen",
    "98d0b8ce19d1434abdb42aa01e83db61": "McDonald's",
    "f0bc45a4dd7a4d8abf91a5e4fb25fe57": "Jesse James",
    ...
}
```
Each line is the predicted answer for one question in the test set (the key is the question id; see the mrc reader documentation for details).
### DEMO3: Joint Training of Multiple Target Tasks with Task-layer Parameter Reuse
In this section we consider an even more complex large-scale multi-task learning scenario. Suppose we have several tasks, each of which may later be used for prediction (i.e., all are target tasks), and, given the correlations among them, we want some of the tasks to also reuse their task-layer parameters. The classification datasets live in `data/cls4mrqa`.
Concretely, suppose we have 6 classification tasks (CLS1 ~ CLS6), all of them target tasks (each task's model is meant to be used for prediction and deployment later). We want the output layers of tasks 1, 2 and 5 to share one set of parameters, tasks 3 and 4 to share another set, and task 6 to have its own, i.e., the parameter reuse relationship shown in the figure below.
![image2](https://tva1.sinaimg.cn/large/006y8mN6ly1g8issdoli5j31ow08ogxv.jpg)
As shown, tasks inside the same box share the same task-layer parameters.
Run the following script to start training for this section with one command:
```shell
bash run_demo3.sh
```
**1. Configure the task instances**
For convenience, we use the same dataset to create 6 classification task instances, named `cls1.yaml`, `cls2.yaml`, `cls3.yaml`, `cls4.yaml`, `cls5.yaml`, `cls6.yaml`. Fill each instance's configuration file with the required fields:
```yaml
train_file: "data/cls4mrqa/train.tsv"
reader: cls
paradigm: cls
n_classes: 4
```
**2. Configure global parameters**
Such complex reuse relationships are easy to define in paddlepalm: we use `task_reuse_tag` to describe task-layer parameter reuse. As with `target_tag`, the elements of `task_reuse_tag` correspond one-to-one with those of `task_instance`; tasks with the same value automatically share task-layer parameters, while tasks with different values do not. So we can describe the setup in the global configuration file as follows:
```yaml
task_instance: "cls1, cls2, cls3, cls4, cls5, cls6"
task_reuse_tag: 0, 0, 1, 1, 0, 2
```
All 6 tasks are target tasks, so we do not need to set `target_tag` manually (tasks are target tasks by default). Note, however, that **with multiple target tasks you can still add auxiliary tasks to accompany them during training**; in that case `target_tag` is needed to distinguish target from auxiliary tasks. Next, we write the other required parameters (backbone, optimizer, etc.) into the global configuration file.
```yaml
save_path: "output_model/thirdrun"
backbone: "ernie"
backbone_config_path: "pretrain_model/ernie/ernie_config.json"
vocab_path: "pretrain_model/ernie/vocab.txt"
do_lower_case: True
max_seq_len: 512 # parameters in the global configuration file are automatically broadcast to each task instance
batch_size: 4
num_epochs: 2
optimizer: "adam"
learning_rate: 3e-5
warmup_proportion: 0.1
weight_decay: 0.1
```
**3. Start training of multiple target tasks**
Finally, as in DEMO1 and DEMO2, we create the `Controller`, which instantiates the task instances; we then load the pretrained model and start multi-task training:
```python
import paddlepalm as palm
if __name__ == '__main__':
controller = palm.Controller('config_demo3.yaml', task_dir='demo3_tasks')
controller.load_pretrain('pretrain_model/ernie/params')
controller.train()
```
You should see log output like the following.
```
Global step: 1. Task: cls4, step 1/15 (epoch 0), loss: 1.344, speed: 0.50 steps/s
Global step: 10. Task: cls4, step 5/15 (epoch 0), loss: 1.398, speed: 2.19 steps/s
Global step: 20. Task: cls2, step 5/15 (epoch 0), loss: 1.260, speed: 2.64 steps/s
cls4: train finished!
cls4: inference model saved at output_model/thirdrun/infer_model
cls5: train finished!
cls5: inference model saved at output_model/thirdrun/infer_model
Global step: 30. Task: cls2, step 7/15 (epoch 0), loss: 0.961, speed: 0.04 steps/s
cls2: train finished!
cls2: inference model saved at output_model/thirdrun/infer_model
Global step: 40. Task: cls6, step 4/15 (epoch 0), loss: 1.412, speed: 2.74 steps/s
Global step: 50. Task: cls2, step 12/15 (epoch 0), loss: 1.011, speed: 2.19 steps/s
cls6: train finished!
cls6: inference model saved at output_model/thirdrun/infer_model
cls1: train finished!
cls1: inference model saved at output_model/thirdrun/infer_model
Global step: 60. Task: cls3, step 7/15 (epoch 0), loss: 1.363, speed: 2.72 steps/s
cls3: train finished!
cls3: inference model saved at output_model/thirdrun/infer_model
```
For a deeper understanding of this demo, see [Termination Conditions and Expected Training Steps with Multiple Target Tasks](#termination-conditions-and-expected-training-steps-with-multiple-target-tasks).
## Advanced Topics
This chapter covers paddlepalm usage in more depth and offers some tips for working more efficiently.
### Configuration Broadcast Mechanism
![PALM原理图](https://tva1.sinaimg.cn/large/006y8mN6ly1g8j1isf3fcj31ne0tyqbd.jpg)
To run multi-task learning we need to configure the backbone, each task and the training procedure; for this, the framework implements an efficient configuration broadcast mechanism. As shown above, the configurations of the backbone and of each task instance are described in yaml and stored in files. Since there may be several task instances, and some hyperparameters are used by both the backbone and the task instances, hyperparameters that would otherwise be "configured repeatedly" with the same value can be written into the global configuration file; when parsing it, the framework automatically "broadcasts" them to the backbone and to every task instance.
Moreover, the global configuration file takes precedence over the backbone and task instance configuration files: when a hyperparameter's value in the global configuration file conflicts with its value elsewhere, the framework uses the value from the global configuration file.
Also, to make large-scale experiments and hyperparameter tuning convenient, every hyperparameter that appears in the **global configuration file** can be controlled from the command line. For example, given the following global configuration file:
```yaml
...
learning_rate: 1e-3
batch_size: 32
...
```
we may want to temporarily adjust the learning rate `learning_rate` and batch size `batch_size` from the command line, which we can do when running the training script as follows:
```shell
python demo3.py --learning_rate 1e-4 --batch_size 64
```
The priority of the various configuration methods is therefore:
**command line > global configuration file > task instance & backbone configuration files**
### Choosing the reader, backbone and paradigm
The reader, backbone and paradigm are the three basic components from which tasks are implemented. The reader is the dataset loading and processing tool: it automatically converts an input dataset of a given format into a fixed dictionary of output elements (e.g., word id sequences, position id sequences). The backbone is the representation model: it converts part of the reader's output into a dictionary of higher-level abstract output elements (e.g., word embeddings, sentence embeddings, the encoder's contextualized word vectors). The paradigm is the task paradigm: it converts part of the reader's output together with the backbone's high-level abstraction of the raw input into the loss needed for training and the outputs needed for prediction.
The framework implements these three components in a decoupled way: each component declares a description of its input objects, inputs_attr(s), and of its output objects, outputs_attr, and each input or output object has a name (describing its meaning), a tensor shape and a data type. For example, the BERT backbone declares its input and output objects as follows:
```python
@property
def inputs_attr(self):
return {"token_ids": [[None, None], 'int64'],
"position_ids": [[None, None], 'int64'],
"segment_ids": [[None, None], 'int64'],
"input_mask": [[None, None], 'float32']}
@property
def outputs_attr(self):
return {"word_embedding": [[None, None, self._emb_size], 'float32'],
"embedding_table": [[None, self._voc_size, self._emb_size], 'float32'],
"encoder_outputs": [[None, None, self._emb_size], 'float32'],
"sentence_embedding": [[None, self._emb_size], 'float32'],
"sentence_pair_embedding": [[None, self._emb_size], 'float32']}
```
Here `inputs_attr` describes BERT's input objects, `token_ids`, `position_ids`, `segment_ids` and `input_mask`, together with their shapes (None means the Tensor's size in that dimension is variable) and data types. `outputs_attr` describes the output objects the BERT module provides, including `word_embedding`, `embedding_table`, `encoder_outputs`, etc.
When creating a task instance, the user only needs to ensure that each component's input objects are contained in the outputs of its upstream components; any such components can then be combined. The backbone's upstream component is the reader; the paradigm's upstream components are both the reader and the backbone. A small sketch of this compatibility rule follows.
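The following sketch (plain Python, not framework code) spells out the rule, assuming components expose `inputs_attr`/`outputs_attr` dictionaries shaped like the BERT example above:

```python
def is_compatible(upstream_outputs_attr, downstream_inputs_attr):
    """True if every declared input is provided upstream with a matching
    dtype and the same number of tensor dimensions."""
    for name, (shape, dtype) in downstream_inputs_attr.items():
        if name not in upstream_outputs_attr:
            return False
        up_shape, up_dtype = upstream_outputs_attr[name]
        if up_dtype != dtype or len(up_shape) != len(shape):
            return False
    return True

# e.g., check that a hypothetical reader's outputs can feed BERT's inputs_attr
reader_outputs = {"token_ids": [[None, None], 'int64'],
                  "position_ids": [[None, None], 'int64'],
                  "segment_ids": [[None, None], 'int64'],
                  "input_mask": [[None, None], 'float32'],
                  "label_ids": [[None], 'int64']}
bert_inputs = {"token_ids": [[None, None], 'int64'],
               "position_ids": [[None, None], 'int64'],
               "segment_ids": [[None, None], 'int64'],
               "input_mask": [[None, None], 'float32']}
print(is_compatible(reader_outputs, bert_inputs))  # True
```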
### Termination Conditions and Expected Training Steps with Multiple Target Tasks
#### Multiple Target Tasks
The framework supports multiple target tasks: when the `task_instance` field of the global configuration file lists more than one task instance, **they are all target tasks by default (i.e., the `target_tag` field is automatically filled with all 1s)**. For every task instance set as a target task, the framework computes an expected number of training steps and saves an inference model for it once that number is reached.
When there are multiple target tasks, the `num_epochs` field of the global configuration file (number of passes over the training set) applies only to the first target task listed, called the main task. The framework derives the expected training steps of the other target tasks from the main task's training steps (controllable via `mix_ratio`, see the next section). **Note that apart from being the reference point for `num_epochs`, the main task is no different from the other target tasks.**
*Note: when training multiple target tasks you can still use auxiliary tasks to improve the test-set performance of all target tasks, but be sure to use target_tag to give the introduced auxiliary tasks the auxiliary tag "0".*
#### Training Termination Condition
Before training starts, the `Controller` computes the expected number of training steps for every target task. When a target task completes its expected training steps, the `Controller` saves that task's inference model and then continues multi-task training according to the configured task sampling probabilities. When all target tasks have reached their expected training steps, multi-task learning terminates. Note that the `Controller` computes no expected training steps for auxiliary tasks and saves no inference models for them; they merely "accompany" the target tasks and have no influence on when multi-task learning terminates.
#### Task Sampling Probability and Expected Training Steps
By default, every task is sampled with equal probability at each training step. If you want to change the sampling probability of some tasks (e.g., a task has a small training set and should be sampled less, or a task is hard and should be trained more), set the `mix_ratio` field in the global configuration file. For example, with three tasks, where mrqa is the target task and the others are auxiliary tasks, we can set `mix_ratio` as follows:
```yaml
task_instance: mrqa, match4mrqa, mlm4mrqa
mix_ratio: 1.0, 0.5, 0.5
```
This setting means the expected number of times `match4mrqa` and `mlm4mrqa` are sampled is half that of the `mrqa` task. In this case, with mrqa set as the target task, if one mrqa epoch takes 5000 steps and the global configuration file sets num_epochs to 2, then under the `mix_ratio` above the mrqa task is trained for 5000\*2\*1.0=10000 steps, while the `match4mrqa` and `mlm4mrqa` tasks are each trained for **roughly** 5000 steps.
> Note: if match4mrqa and mlm4mrqa are set as auxiliary tasks, their actual number of training steps may be slightly more or less than 5000. For target tasks, it is exactly 5000 steps.
#### Computing Expected Training Steps with Multiple Target Tasks
With multiple target tasks, `num_epochs` applies only to the **first configured target task (the "main task")**; the expected training steps of the remaining target tasks and auxiliary tasks are then derived from the `mix_ratio` settings, as sketched below.
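The bookkeeping is simple enough to spell out (a sketch of the formula implied above, not framework code):

```python
def expected_steps(main_epochs, main_steps_per_epoch, mix_ratio):
    """mix_ratio maps task name -> sampling weight; the first entry is the main task."""
    names = list(mix_ratio)
    main = names[0]
    main_steps = main_epochs * main_steps_per_epoch
    return {n: int(main_steps * mix_ratio[n] / mix_ratio[main]) for n in names}

# the example from the previous section: 5000 steps/epoch, num_epochs = 2
print(expected_steps(2, 5000, {'mrqa': 1.0, 'match4mrqa': 0.5, 'mlm4mrqa': 0.5}))
# -> {'mrqa': 10000, 'match4mrqa': 5000, 'mlm4mrqa': 5000}
```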
### Model Saving and Prediction
The `Controller` can save two kinds of models during training: checkpoint models and inference models.
A checkpoint captures the full global state of the network at the current training moment, including the global parameters, local parameters and persistent variables of the backbone, all tasks and the optimizer, i.e., the complete multi-task learning computation graph. Checkpoints are used to resume training after an unexpected interruption, or to continue training the same model in stages. By default the `Controller` does not save checkpoints, but you can control the checkpoint frequency by adding `save_every_n_steps` to the global configuration file; setting it to 5000, for instance, saves a checkpoint every 5000 global training steps. Checkpoints are placed under the path given by `save_path` in the global configuration file.
An inference model, by contrast, is the complete prediction model of one task: it contains no parameters of other tasks, and no nodes that are unneeded at inference time such as the optimizer or dropout layers. When saving an inference model, the `Controller` also saves the configuration needed for prediction, such as the model's input and output lists; at prediction time, call the `pred` interface of an instantiated `Controller` to predict directly for the task. See DEMO2 for a prediction usage example.
### Distributed Training
The framework seamlessly integrates single-GPU and single-machine multi-GPU training. When multiple GPUs are available, the framework automatically replicates the model onto the cards; at each step, every card computes `batch_size` training samples, and the framework automatically merges the gradients across cards. For example, with 8 GPUs and `batch_size` set to 32, the effective batch size per step is 32\*8=256.
To train on a single card in a multi-card environment, control card visibility via the [CUDA_VISIBLE_DEVICES](https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/) environment variable, e.g. `CUDA_VISIBLE_DEVICES=0 python demo1.py` to use only the first card.
## Appendix A: Built-in Dataset Readers (reader)
All built-in readers support both Chinese and English input data. **English data is read by default**; to read Chinese data, set the following in the configuration file:
```yaml
for_cn: True
```
All built-in readers support the following fields:
```yaml
vocab_path (REQUIRED): str. Path to the vocabulary file.
max_seq_len (REQUIRED): int. Maximum sequence length after tokenization (i.e., maximum length of the token ids). Note that after tokenization there are usually more token ids than original words (e.g., with a wordpiece tokenizer).
batch_size (REQUIRED): int. Batch size for training or prediction (number of samples fed to the network per step).
train_file (REQUIRED): str. Path to the training set file. May be left unset when only predicting.
pred_file (REQUIRED): str. Path to the test set file. May be left unset when only training.
do_lower_case (OPTIONAL): bool, default False. Whether to convert uppercase English letters to lowercase.
shuffle (OPTIONAL): bool, default True. Whether to shuffle the dataset during training; when True, the samples are shuffled globally. This flag has no effect on the prediction phase (datasets are never shuffled during prediction).
seed (OPTIONAL): int. Random seed.
pred_batch_size (OPTIONAL): int. Batch size for the prediction phase; if unset, the value of `batch_size` is used.
print_first_n (OPTIONAL): int. Print the first n samples of the dataset and the corresponding reader output; default 0.
```
#### Text classification reader: cls
This reader loads and processes text classification datasets. It accepts [tsv-format](https://en.wikipedia.org/wiki/Tab-separated_values) input with two columns: the sample label `label` and the raw text `text_a`. For sample data see the dataset files in `data/cls4mrqa`, formatted like:
```
label text_a
1 when was the last time the san antonio spurs missed the playoffshave only missed the playoffs four times since entering the NBA
0 the creation of the federal reserve system was an attempt toReserve System ( also known as the Federal Reserve or simply the Fed ) is the central banking system of the United States of America .
2 group f / 64 was a major backlash against the earlier photographic movement off / 64 was formed , Edward Weston went to a meeting of the John Reed Club , which was founded to support Marxist artists and writers .
0 Bessarabia eventually became under the control of which country?
```
***Note: the first line of the dataset must be a header naming each column.***
This reader additionally supports the following configuration field:
```yaml
n_classes (REQUIRED): int. Number of classes of the classification task.
```
The reader's output (the data yielded by the generator each time) contains the following fields:
```yaml
token_ids: a matrix of shape [batch_size, seq_len]; each row is one sample, each element the word id of the corresponding token.
position_ids: a matrix of shape [batch_size, seq_len]; each element is the position id of the corresponding token.
segment_ids: an all-zero matrix of shape [batch_size, seq_len], used to support the inputs of models such as BERT and ERNIE.
input_mask: a matrix of shape [batch_size, seq_len] whose elements are 0 or 1, marking whether the position is padding (1 for a real token, 0 for padding).
label_ids: a vector of shape [batch_size]; each element is the sample's class label.
task_ids: an all-zero matrix of shape [batch_size, seq_len], used to support ERNIE model inputs.
```
In the prediction phase, the data yielded by the reader does not contain the `label_ids` field.
#### Text matching reader: match
This reader loads and processes text matching datasets. It accepts [tsv-format](https://en.wikipedia.org/wiki/Tab-separated_values) input with three columns: the sample label `label` and the two texts to match, `text_a` and `text_b`. For sample data see the dataset files in `data/match4mrqa`, formatted like:
```
label text_a text_b
1 From what work of Durkheim's was interaction ritual theory derived? **[TAB]** Subsequent to these developments, Randall Collins (2004) formulated his interaction ritual theory by drawing on Durkheim's work on totemic rituals that was extended by Goffman (1964/2013; 1967) into everyday focused encounters. Based on interaction ritual theory, we experience different levels
0 where is port au prince located in haiti **[TAB]** Its population is difficult to ascertain due to the rapid growth of slums in the hillsides
0 What is the world’s first-ever pilsner type blond lager, the company also awarded the Master Homebrewer Competition held in San Francisco to an award-winning brewer who won the prestigious American Homebrewers Associations' Homebrewer of the Year award in 2013? **[TAB]** of the Year award in 2013, becoming the first woman in thirty years, and the first African American person ever to ever win the award.
1 What has Pakistan told phone companies? **[TAB]** Islamabad, Pakistan (CNN) -- Under heavy criticism for a telling cell phone carriers to ban certain words in text messages, the Pakistan Telecommunication Authority went into damage control mode Wednesday.
```
***Note: the first line of the dataset must be a header naming each column.***
The reader's output (the data yielded by the generator each time) contains the following fields:
```yaml
token_ids: a matrix of shape [batch_size, seq_len]; each row is one sample (a text pair), each element the word id of the corresponding token; the two texts are separated by the id of `[SEP]`.
position_ids: a matrix of shape [batch_size, seq_len]; each element is the position id of the corresponding token.
segment_ids: a matrix of shape [batch_size, seq_len] whose elements are 0 at positions of text 1 and 1 at positions of text 2, used to support the inputs of models such as BERT and ERNIE.
input_mask: a matrix of shape [batch_size, seq_len] whose elements are 0 or 1, marking whether the position is padding (1 for a real token, 0 for padding).
label_ids: a vector of shape [batch_size]; each element is the sample's label: 0 means the two texts do not match, 1 means they match.
task_ids: an all-zero matrix of shape [batch_size, seq_len], used to support ERNIE model inputs.
```
In the prediction phase, the data yielded by the reader does not contain the `label_ids` field.
#### Machine reading comprehension reader: mrc
This reader supports loading machine reading comprehension datasets with a sliding window: long contexts are automatically split by stride into several sub-documents, answer spans are computed for each sub-document paired with the question, and the results are merged at the final stage. The reader accepts datasets in [json format](). For sample data see the dataset files in `data/mrqa`, formatted as follows.
```json
{
    "version": "1.0",
    "data": [
        {"title": "...",
         "paragraphs": [
            {"context": "...",
             "qas": [
                {"question": "...",
                 "id": "...",
                 "answers": [
                    {"text": "...",
                     "answer_start": ...},
                    {...},
                    ...
                 ]
                },
                {...},
                ...
             ]
            },
            {...},
            ...
         ]
        },
        {...},
        ...
    ]
}
```
The outermost structure of the dataset is a dictionary containing the dataset version `version` and the data `data`. The `data` field holds the samples; each sample has an article title `title` and several paragraphs `paragraphs`. Each element of `paragraphs` has a passage `context`, on which several questions with their answers `qas` can be based; the answers lie inside the passage. Each element of `qas` contains a question `question`, a globally unique identifier `id`, and one or more answers `answers`. Each answer contains the answer text `text` and its start position `answer_start` in the `context`; note the start position is character-level. In test sets, `qas` may omit the `answers` field. The snippet after this paragraph shows how to walk this structure.
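For concreteness, a short sketch (standard library only; the path is the demo dataset mentioned above) that walks the structure and prints each question id with its first answer span:

```python
import json

with open('data/mrqa/train.json') as f:
    dataset = json.load(f)

for article in dataset['data']:
    for paragraph in article['paragraphs']:
        for qa in paragraph['qas']:
            answer = qa['answers'][0]  # training sets carry at least one answer
            print(qa['id'], repr(answer['text']), 'at char', answer['answer_start'])
```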
This reader supports the following additional configuration fields:
```yaml
doc_stride (REQUIRED): int. Stride of the sliding window applied to the context.
max_query_len (REQUIRED): int. Maximum length of the query.
max_answer_len (REQUIRED): int. Maximum answer length in the prediction phase; may be left unset when not predicting.
n_best_size (OPTIONAL): int. Size of the n-best list kept per sample when merging sliding-window samples in the prediction phase.
```
The reader's output (the data yielded by the generator each time) contains the following fields:
```yaml
token_ids: a matrix of shape [batch_size, seq_len]; each row is one sample (a text pair, text 1 being the context and text 2 the question), each element the word id of the corresponding token; the two texts are separated by the id of `[SEP]`.
position_ids: a matrix of shape [batch_size, seq_len]; each element is the position id of the corresponding token.
segment_ids: a matrix of shape [batch_size, seq_len] whose elements are 0 at positions of text 1 and 1 at positions of text 2, used to support the inputs of models such as BERT and ERNIE.
input_mask: a matrix of shape [batch_size, seq_len] whose elements are 0 or 1, marking whether the position is padding (1 for a real token, 0 for padding).
task_ids: an all-zero matrix of shape [batch_size, seq_len], used to support ERNIE model inputs.
start_positions: a vector of shape [batch_size]; each element is the start position of the sample's answer span.
end_positions: a vector of shape [batch_size]; each element is the end position of the sample's answer span.
```
In the prediction phase, the data yielded by the reader does not contain the `label_ids` field, but additionally contains a `unique_ids` field:
```yaml
unique_ids: a matrix of shape [batch_size, seq_len] holding each sample's globally unique id, used to merge sliding-window results after prediction.
```
#### Masked language model reader: mlm
This reader loads and processes masked language model datasets. It accepts [tsv-format](https://en.wikipedia.org/wiki/Tab-separated_values) input. MLM is a self-supervised task, so the dataset contains a single column `text_a`; the reader automatically generates random training labels for each sample. The format is as follows:
```
text_a
Subsequent to these developments, Randall Collins (2004) formulated his interaction ritual theory by drawing on Durkheim's work on totemic rituals that was extended by Goffman (1964/2013; 1967) into everyday focused encounters.
Presidential spokesman Abigail Valte earlier Saturday urged residents of low-lying and mountainous areas that could be hit hard by the storm to evacuate, the state news agency said, citing an interview conducted on a government radio station. World Vision, the Christian humanitarian organization, said Saturday that it had to postpone some of its relief efforts due to Nalgae, with two of three emergency teams set to deploy once the storm passes. Another team is in Bulcan province, most of which is "still submerged" because of Nesat. The group is focusing its post-Nesat efforts on two communities in Manila and three in the northern Isabela and Zambales provinces.
of the Year award in 2013, becoming the first woman in thirty years, and the first African American person ever to ever win the award. After an extensive career with the California State Legislature she began working for PicoBrew, a product development company in Seattle, WA that specializes in automated brewing equipment.
the gakkel ridge is a boundary between which two tectonic plates Mid-Atlantic Ridge ( MAR ) is a mid-ocean ridge , a divergent tectonic plate or constructive plate boundary located along the floor of the Atlantic Ocean , and part of the longest mountain range in the world . The ridge extends from a junction with the Gakkel Ridge ( Mid-Arctic Ridge ) northeast of Greenland southward to the Bouvet Triple Junction in the South Atlantic .
```
***Note: the first line of the dataset must be a header naming each column.***
The reader's output (the data yielded by the generator each time) contains the following objects:
```yaml
token_ids: a matrix of shape [batch_size, seq_len]; each row is one sample, each element the word id of the corresponding token.
position_ids: a matrix of shape [batch_size, seq_len]; each element is the position id of the corresponding token.
segment_ids: an all-zero matrix of shape [batch_size, seq_len], used to support the inputs of models such as BERT and ERNIE.
input_mask: a matrix of shape [batch_size, seq_len] whose elements are 0 or 1, marking whether the position is padding (1 for a real token, 0 for padding).
mask_label: a vector of shape [None]; each element is the true word id of a masked token.
mask_pos: a vector of shape [None], the same length as `mask_label` with one-to-one correspondence; each element is the position of a masked token.
task_ids: an all-zero matrix of shape [batch_size, seq_len], used to support ERNIE model inputs.
```
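A toy illustration (numpy only, not framework code; the token ids and `[MASK]` id below are made up) of how `mask_pos` and `mask_label` line up with the flattened batch:

```python
import numpy as np

token_ids = np.array([[101, 7592, 103, 2088, 102],   # 103 plays the role of [MASK] here
                      [101, 103, 2003, 2204, 102]])
mask_pos = np.array([2, 6])           # positions of masked tokens in the flattened batch
mask_label = np.array([2307, 2023])   # original word ids of those masked tokens

flat = token_ids.reshape(-1)
assert all(flat[p] == 103 for p in mask_pos)  # every masked slot holds the mask id
print(list(zip(mask_pos, mask_label)))        # position -> original id pairs
```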
## Appendix B: Built-in Backbones (backbone)
The framework ships with BERT and ERNIE as backbones; more backbones such as XLNet will be added in the future.
#### BERT
BERT takes the following input objects:
```yaml
token_ids: a matrix of shape [batch_size, seq_len]; each row is one sample, each element the word id of the corresponding token.
position_ids: a matrix of shape [batch_size, seq_len]; each element is the position id of the corresponding token.
segment_ids: a 0/1 matrix of shape [batch_size, seq_len] used to support the inputs of models such as BERT and ERNIE: 0 means the token belongs to text1 of a classification or matching task, 1 means it belongs to text2 of a matching task.
input_mask: a matrix of shape [batch_size, seq_len] whose elements are 0 or 1, marking whether the position is padding (1 for a real token, 0 for padding).
```
It provides the following output objects for downstream components:
```yaml
word_embedding: a Tensor of shape [batch_size, seq_len, emb_size], float32. The (context-independent) word embedding sequence of each sample in the current batch.
embedding_table: a matrix of shape [vocab_size, emb_size], float32. The word embedding lookup table maintained by BERT.
encoder_outputs: a Tensor of shape [batch_size, seq_len, hidden_size], float32. The BERT encoder's encoding of each sample in the current batch.
sentence_embedding: a matrix of shape [batch_size, hidden_size], float32. Each row is the BERT encoder's sentence embedding of the corresponding sample in the current batch.
sentence_pair_embedding: a matrix of shape [batch_size, hidden_size], float32. Each row is the BERT encoder's sentence-pair embedding of the corresponding sample in the current batch.
```
#### ERNIE
ERNIE takes the following input objects:
```yaml
token_ids: a matrix of shape [batch_size, seq_len]; each row is one sample, each element the word id of the corresponding token.
position_ids: a matrix of shape [batch_size, seq_len]; each element is the position id of the corresponding token.
segment_ids: a 0/1 matrix of shape [batch_size, seq_len]: 0 means the token belongs to text1 of a classification or matching task, 1 means it belongs to text2 of a matching task.
input_mask: a matrix of shape [batch_size, seq_len] whose elements are 0 or 1, marking whether the position is padding (1 for a real token, 0 for padding).
task_ids: an all-zero matrix of shape [batch_size, seq_len], used to support ERNIE finetuning.
```
It provides the following output objects for downstream components:
```yaml
word_embedding: a Tensor of shape [batch_size, seq_len, emb_size], float32. The (context-independent) word embedding sequence of each sample in the current batch.
embedding_table: a matrix of shape [vocab_size, emb_size], float32. The word embedding lookup table maintained by the model.
encoder_outputs: a Tensor of shape [batch_size, seq_len, hidden_size], float32. The encoder's encoding of each sample in the current batch.
sentence_embedding: a matrix of shape [batch_size, hidden_size], float32. Each row is the encoder's sentence embedding of the corresponding sample in the current batch.
sentence_pair_embedding: a matrix of shape [batch_size, hidden_size], float32. Each row is the encoder's sentence-pair embedding of the corresponding sample in the current batch.
```
## Appendix C: Built-in Task Paradigms (paradigm)
#### Classification paradigm: cls
The classification paradigm supports the following additional configuration fields:
```yaml
n_classes (REQUIRED): int. Number of classes of the classification task.
pred_output_path (OPTIONAL): str. Save path for prediction output; when unset, output is saved to the task directory under the path given by `save_path` in the global configuration file.
```
The classification paradigm takes the following input objects:
Training phase:
```yaml
sentence_embedding: a matrix of shape [batch_size, hidden_size], float32. Each row is the encoder's sentence embedding of the corresponding sample in the current batch.
label_ids: a vector of shape [batch_size]; each element is the sample's class label.
```
Prediction phase:
```yaml
sentence_embedding: a matrix of shape [batch_size, hidden_size], float32. Each row is the encoder's sentence embedding of the corresponding sample in the current batch.
```
In the training phase it outputs the loss; in the prediction phase it outputs the predicted probability of each class.
#### Matching paradigm: match
The matching paradigm supports the following additional configuration fields:
```yaml
pred_output_path (OPTIONAL): str. Save path for prediction output; when unset, output is saved to the task directory under the path given by `save_path` in the global configuration file.
```
The matching paradigm takes the following input objects:
Training phase:
```yaml
sentence_pair_embedding: a matrix of shape [batch_size, hidden_size], float32. Each row is the encoder's sentence-pair embedding of the corresponding sample in the current batch.
label_ids: a vector of shape [batch_size]; each element is the sample's label: 0 means the two texts do not match, 1 means they match.
```
Prediction phase:
```yaml
sentence_pair_embedding: a matrix of shape [batch_size, hidden_size], float32. Each row is the encoder's sentence-pair embedding of the corresponding sample in the current batch.
```
In the training phase it outputs the loss; in the prediction phase it outputs the probability distribution over match/mismatch.
#### Machine reading comprehension paradigm: mrc
The MRC paradigm supports the following additional configuration fields:
```yaml
max_answer_len (REQUIRED): int. Maximum length of a predicted answer.
n_best_size (OPTIONAL): int, default 20. Size of the n-best list per sample in the saved n-best answer file during prediction.
pred_output_path (OPTIONAL): str. Save path for prediction output; when unset, output is saved to the task directory under the path given by `save_path` in the global configuration file.
```
The MRC paradigm takes the following input objects:
Training phase:
```yaml
encoder_outputs: a Tensor of shape [batch_size, seq_len, hidden_size], float32. The encoder's encoding of each sample in the current batch.
start_positions: a vector of shape [batch_size]; each element is the start position of the sample's answer span.
end_positions: a vector of shape [batch_size]; each element is the end position of the sample's answer span.
```
Prediction phase:
```yaml
encoder_outputs: a Tensor of shape [batch_size, seq_len, hidden_size], float32. The encoder's encoding of each sample in the current batch.
unique_ids: a matrix of shape [batch_size, seq_len] holding each sample's globally unique id, used to merge sliding-window results after prediction.
```
#### Masked language model paradigm: mlm
This paradigm is self-supervised; it does not support prediction and is only used for (auxiliary) training. It takes the following input objects:
```yaml
mask_label: a vector of shape [None]; each element is the true word id of a masked token.
mask_pos: a vector of shape [None], the same length as `mask_label` with one-to-one correspondence; each element is the position of a masked token.
embedding_table: a matrix of shape [vocab_size, emb_size], float32. The word embedding lookup table maintained by the model.
encoder_outputs: a Tensor of shape [batch_size, seq_len, hidden_size], float32. The encoder's encoding of each sample in the current batch.
```
## Appendix D: Configurable Global Parameters
```yaml
task_instance (REQUIRED): str. Names of the task instances to train or predict. In multi-task mode, separate the tasks with commas `,`. Names are the task instance configuration file names (without the .yaml suffix).
mix_ratio (OPTIONAL): str. Sampling probability of each task during training, comma-separated, corresponding one-to-one with the elements of task_instance. Defaults to 1.0 for every task, i.e., all tasks are sampled with equal probability (the same expected number of sampling events as the main task). See "Advanced Topics: Termination Conditions and Expected Training Steps".
target_tag (OPTIONAL): str. Target/auxiliary task flags, comma-separated, corresponding one-to-one with the elements of task_instance. A task tagged 1 is a target task; a task tagged 0 is an auxiliary task. Defaults to all 1s (every task is a target task). See DEMO2 for a usage example.
task_reuse_tag (OPTIONAL): str. Task-layer reuse flags, comma-separated, corresponding one-to-one with the elements of task_instance. Tasks with the same value automatically share task-layer parameters; tasks with different values do not. See DEMO3 for a usage example.
backbone (REQUIRED): str. Name of the backbone network.
backbone_config_path (OPTIONAL): str. Path to the backbone configuration file.
save_path (REQUIRED): str. Save path for checkpoints and for the inference model of each target task.
vocab_path (REQUIRED): str. Vocabulary file in plain text, one word per line, used by the reader, the backbone and the tasks.
do_lower_case (OPTIONAL): bool. Lowercasing flag. Defaults to False, i.e., case-sensitive.
for_cn: bool. Chinese-mode flag. Defaults to False, i.e., input is assumed to be English; when set to True, tokenization, post-processing, etc. are done for Chinese.
print_every_n_steps (OPTIONAL): int. Defaults to 5. Logging frequency (in steps) during training.
save_every_n_steps (OPTIONAL): int. Defaults to -1. Frequency of checkpoint saving during training; by default no checkpoints are saved.
optimizer (REQUIRED): str. Optimizer name; currently only adam is supported, more optimizers will be added.
learning_rate (REQUIRED): float. Learning rate during training.
batch_size (REQUIRED): int. Batch size, i.e., the number of samples per training or inference step.
num_epochs (REQUIRED): int. Number of training epochs of the main task.
use_gpu (OPTIONAL): bool. Defaults to True. The framework uses GPUs for single-card or distributed training by default; set to False to train or infer on CPU.
warmup_proportion (OPTIONAL): float. Defaults to 0. Proportion of warmup training steps among the estimated total training steps when finetuning a pretrained model.
use_ema (OPTIONAL): bool. Defaults to False. Whether to enable [ema](https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average) for training and inference.
ema_decay (OPTIONAL): float. Defaults to 0. Decay rate of ema when enabled.
random_seed (OPTIONAL): int. Random seed; defaults to 1.
```
## License
......
# PaddlePALM
PaddlePALM (Paddle for Multi-task) 是一个强大快速、灵活易用的NLP大规模多任务学习框架。通过PaddlePALM,用户可以轻松完成复杂的多任务学习与参数复用,无缝集成「**单任务训练**」、「**多任务辅助训练**」和「**多目标任务联合训练**」这 *3* 种训练方式和灵活的保存与预测机制,且仅需书写极少量代码即可”一键启动”高性能单机单卡和分布式训练与推理。
框架中内置了丰富的[主干网络](#附录b内置主干网络backbone)及其[预训练模型](#预训练模型)(BERT、ERNIE等)、常见的[任务范式](#附录c内置任务范式paradigm)(分类、匹配、机器阅读理解等)和相应的[数据集读取与处理工具](#附录a内置数据集载入与处理工具reader)。同时框架提供了用户自定义接口,若内置工具、主干网络和任务无法满足需求,开发者可以轻松完成相关组件的自定义。各个组件均为零耦合设计,用户仅需完成组件本身的特性开发即可完成与框架的融合。
## 目录
- [安装](#安装)
- [前期准备](#前期准备)
- [理论准备](#理论准备)
- [框架原理](#框架原理)
- [预训练模型](#预训练模型)
- [三个DEMO入门PaddlePALM](#三个demo入门paddlepalm)
- [DEMO1:单任务训练](#demo1单任务训练)
- [DEMO2:多任务辅助训练与目标任务预测](#demo2多任务辅助训练与目标任务预测)
- [DEMO3:多目标任务联合训练与任务层参数复用](#demo3多目标任务联合训练与任务层参数复用)
- [进阶篇](#进阶篇)
- [配置广播机制](#配置广播机制)
- [reader、backbone与paradigm的选择](#readerbackbone与paradigm的选择)
- [多目标任务下的训练终止条件与预期训练步数](#多目标任务下的训练终止条件与预期训练步数)
- [多个目标任务](#多个目标任务)
- [训练终止条件](#训练终止条件)
- [任务采样概率与预期训练步数](#任务采样概率与预期训练步数)
- [多个目标任务时预期训练步数的计算](#多个目标任务时预期训练步数的计算)
- [模型保存与预测机制](#模型保存与预测机制)
- [分布式训练](#分布式训练)
- [附录A:内置数据集载入与处理工具(reader)](#附录a内置数据集载入与处理工具reader)
- [附录B:内置主干网络(backbone)](#附录b内置主干网络backbone)
- [附录C:内置任务范式(paradigm)](#附录c内置任务范式paradigm)
- [附录D:可配置的全局参数列表](#附录d可配置的全局参数列表)
## 安装
推荐使用pip安装paddlepalm框架:
```shell
pip install paddlepalm
```
对于离线机器,可以使用基于源码的安装方式:
```shell
git clone https://github.com/PaddlePaddle/PALM.git
cd PALM && python setup.py install
```
**环境依赖**
- Python >= 2.7
- cuda >= 9.0
- cudnn >= 7.0
- PaddlePaddle >= 1.6.1 (请参考[安装指南](http://www.paddlepaddle.org/#quick-start)进行安装)
## 框架代码结构
```text
.
├── mtl_controller.py # 任务控制器,负责创建和调度各个任务实例来完成多任务学习
├── task_instance.py # 任务实例类,完成任务实例的配置管理、训练进程管理、保存与载入等
├── default_settings.py # 默认的环境变量和框架配置
├── utils # 框架核心工具集
│ ├── config_helper.py # 配置工具类,完成命令行与json、yaml的联合解析
│ ├── reader_helper.py # 完成多任务数据集iterators的合并、采样、调度和归一化,连接python生成器与计算图
│ ├── saver.py # 模型保存与载入
│ ├── print_helper.py # 日志打印规范化工具
│ ├── plot_helper.py # 命令行绘图工具
│ └── textprocess_helper.py # 文本数据处理工具函数
├── backbone # 框架预置的主干网络
│ ├── ernie.py # ERNIE模型
│ ├── bert.py # BERT模型
│ └── utils # 实现主干网络的一些可复用的工具函数
├── reader # 框架内置的数据集载入与处理工具
│ ├── cls.py # 文本分类数据集工具
│ ├── match.py # 文本匹配数据集工具
│ ├── mrc.py # 机器阅读理解数据集工具
│ └── mlm.py # 掩码语言模型(mask language model)数据集生成与处理工具
└── paradigm # 任务范式
├── cls.py # 文本分类
├── match.py # 文本匹配
├── mrc.py # 机器阅读理解
└── mlm.py # 掩码语言模型(mask language model)
```
## 前期准备
### 理论准备
框架默认采用一对多(One-to-Many)的参数共享方式,如图
![image-20191022194400259](https://tva1.sinaimg.cn/large/006y8mN6ly1g88ajvpqmgj31hu07wn5s.jpg)
例如我们有一个目标任务MRC和两个辅助任务MLM和MATCH,我们希望通过MLM和MATCH来提高目标任务MRC的测试集表现(即提升模型泛化能力),那么我们可以令三个任务共享相同的文本特征抽取模型(例如BERT、ERNIE等),然后分别为每个任务定义输出层,计算各自的loss值。
框架默认采用任务采样+mini-batch采样的方式(alternating mini-batches optimization)进行模型训练,即对于每个训练step,首先对候选任务进行采样(采样权重取决于用户设置的`mix_ratio`),而后从该任务的训练集中采样出一个mini-batch(采样出的样本数取决于用户设置的`batch_size`)。
### 框架原理
paddlepalm框架的运行原理图如图所示
![PALM原理图](https://tva1.sinaimg.cn/large/006y8mN6ly1g8j1isf3fcj31ne0tyqbd.jpg)
首先用户为数据集载入与处理、主干网络和任务编写配置文件(框架实现了灵活易用的[配置广播机制](#配置广播机制)),而后用其创建多任务学习的控制器(`Controller`)。进而控制器创建任务实例,并根据用户调用的训练和预测接口对其参数和各个任务实例进行管理和调度。下面我们通过三个DEMO和进阶篇来快速入门并更深入的理解paddlepalm。
### 预训练模型
#### 下载
我们提供了BERT、ERNIE等主干网络的相关预训练模型。为了加速模型收敛,获得更佳的测试集表现,我们强烈建议用户在多任务学习时尽量在预训练模型的基础上进行(而不是从参数随机初始化开始)。用户可通过运行`script/download_pretrain_models <model_name>`下载需要的预训练模型,例如,下载预训练BERT模型(uncased large)的命令如下
```shell
bash script/download_pretrain_backbone.sh bert
```
脚本会自动在**当前文件夹**中创建一个pretrain_model目录(注:运行DEMO时,需保证pretrain_model文件夹在PALM项目目录下),并在其中创建bert子目录,里面存放预训练模型(`params`文件夹内)、相关的网络参数(`bert_config.json`)和字典(`vocab.txt`)。除了BERT模型,脚本还提供了ERNIE预训练模型(uncased large)的一键下载,将`<model_name>`改成`ernie`即可。全部可用的预训练模型列表见[paddlenlp/lark](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK)
#### 转换
注意,预训练模型不能直接被框架使用。我们提供了转换脚本可以将其转换成paddlepalm的模型格式。如下,通过运行`script/convert_params.sh`可将预训练模型bert转换成框架的模型格式。
```shell
bash script/convert_params.sh pretrain_model/bert/params
```
注意,以下恢复操作在执行后述DEMO流程中**无需执行**。
若用户需将转换成的paddlepalm模型恢复为原始的预训练模型,可以运行`script/recover_params.sh`进行恢复。
```shell
bash script/recover_params.sh pretrain_model/bert/params
```
## 三个DEMO入门PaddlePALM
### DEMO1:单任务训练
框架支持对任何一个内置任务进行传统的单任务训练。接下来我们启动一个复杂的机器阅读理解任务的训练,我们在`data/mrqa`文件夹中提供了[EMNLP2019 MRQA机器阅读理解评测](https://mrqa.github.io/shared)的部分比赛数据。下面我们利用该数据尝试完成一个基于BERT的机器阅读理解任务MRQA的单任务学习。
用户可通过运行如下脚本一键开始本节任务的训练
```shell
bash run_demo1.sh
```
下面以该任务为例,讲解如何基于paddlepalm框架轻松实现该任务。
**1. 配置任务实例**
首先,我们编写该任务实例的配置文件`mrqa.yaml`,若该任务实例参与训练或预测,则框架将自动解析该配置文件并创建相应的任务实例。配置文件需符合yaml格式的要求。一个任务实例的配置文件最少应包含`train_file`,`reader`和`paradigm`这三个字段,分别代表训练集的文件路径`train_file`、使用的数据集载入与处理工具`reader`、任务范式`paradigm`。
```yaml
train_file: data/mrqa/train.json
reader: mrc
paradigm: mrc
```
*注:框架内置的其他数据集载入与处理工具见[这里](#附录a内置数据集载入与处理工具reader),任务范式列表见[这里](#附录c内置任务范式paradigm)*
此外,我们还需要配置reader的预处理规则,各个预置reader支持的预处理配置和规则请参考[这里](#附录a内置数据集载入与处理工具reader)。预处理规则同样直接写入`mrqa.yaml`中。
```yaml
max_seq_len: 512
max_query_len: 64
doc_stride: 128 # 在MRQA数据集中,存在较长的文档,因此我们这里使用滑动窗口处理样本,滑动步长设置为128
do_lower_case: True
vocab_path: "pretrain_model/bert/vocab.txt"
```
更详细的任务实例配置方法(为任务实例选择合适的reader、paradigm和backbone)可参考[这里](#readerbackbone与paradigm的选择)
**2.配置backbone和训练规则**
然后我们编写全局配置文件`config_demo1.yaml`。在这里可以完成对主干网络(backbone)、多任务学习规则以及[广播到任务实例](#配置广播机制)的配置。同样使用yaml格式描述,例如在这里我们可以配置一下需要学习的任务`task_instance`、模型的保存路径`save_path`、基于的主干网络`backbone`、优化器`optimizer`等。
```yaml
task_instance: "mrqa"
save_path: "output_model/firstrun"
backbone: "bert"
backbone_config_path: "pretrain_model/bert/bert_config.json"
optimizer: "adam"
learning_rate: 3e-5
batch_size: 4
num_epochs: 2
warmup_proportion: 0.1
```
这里的task_instance即填写我们刚刚编写的任务实例配置文件的文件名`mrqa`**(注意不要包括.yaml后缀!)**。框架启动多任务学习后会根据`task_instance`中指定的任务实例来寻找相关配置文件,并创建任务实例。
此外,backbone的相关配置除了可以直接写入全局配置文件以外,还可以在额外的一个json文件中进行描述,并在全局配置文件中通过`backbone_config_path`进行该配置文件路径的指定。
*注:框架支持的其他内置全局参数见[这里](#附录d可配置的全局参数列表)*
**3.开始训练**
下面我们开始尝试启动MRQA任务的训练(该代码位于`demo1.py`中)。如[框架原理](#框架原理)所述,框架的核心组件是`Controller`,负责多任务学习的启动。
```python
# Demo 1: single task training of MRQA
import paddlepalm as palm
if __name__ == '__main__':
controller = palm.Controller('config_demo1.yaml', task_dir='demo1_tasks')
controller.load_pretrain('pretrain_model/bert/params')
controller.train()
```
训练日志如下,可以看到loss值随着训练收敛。在训练结束后,`Controller`自动为mrqa任务保存预测模型。
```
Global step: 10. Task: mrqa, step 10/135 (epoch 0), loss: 5.928, speed: 0.67 steps/s
Global step: 20. Task: mrqa, step 20/135 (epoch 0), loss: 4.594, speed: 0.75 steps/s
Global step: 30. Task: mrqa, step 30/135 (epoch 0), loss: 1.663, speed: 0.75 steps/s
...
Global step: 250. Task: mrqa, step 115/135 (epoch 1), loss: 1.391, speed: 0.75 steps/s
Global step: 260. Task: mrqa, step 125/135 (epoch 1), loss: 1.871, speed: 0.75 steps/s
Global step: 270. Task: mrqa, step 135/135 (epoch 1), loss: 1.544, speed: 0.75 steps/s
mrqa: train finished!
mrqa: inference model saved at output_model/firstrun/mrqa/infer_model
```
### DEMO2:多任务辅助训练与目标任务预测
本节我们考虑更加复杂的学习目标,我们引入一个掩码语言模型(Mask Language Model,MLM)问答匹配(QA Match)任务来辅助上一节MRQA任务的训练,相关训练数据分别位于`data/mlm4mrqa`和`data/match4mrqa`。并且我们这里换用ERNIE模型作为主干网络,来获得更佳的效果。在多任务训练结束后,我们使用训练好的模型来对MRQA任务的测试集进行预测。
用户可通过运行如下脚本直接开始本节任务的训练
```shell
bash run_demo2.sh
```
下面以该任务为例,讲解如何基于paddlepalm框架轻松实现这个复杂的多任务学习。
**1. 配置任务实例**
首先,我们像上一节一样为MLM任务和Matching任务分别创建任务实例的配置文件`mlm4mrqa.yaml`和`match4mrqa.yaml`:
```yaml
----- mlm4mrqa.yaml -----
train_file: "data/mlm4mrqa/train.tsv"
reader: mlm
paradigm: mlm
----- match4mrqa.yaml -----
train_file: "data/match/train.tsv"
reader: match
paradigm: match
```
由于我们在训练结束后要对MRQA任务的测试集进行预测,因此我们要在之前写好的`mrqa.yaml`中追加预测相关的配置
```yaml
pred_file: data/mrqa/dev.json
pred_output_path: 'mrqa_output'
max_answer_len: 30
n_best_size: 20
```
**2.配置全局参数**
由于MRQA、MLM和Matching任务有相同的字典、大小写配置、截断长度等,因此我们可以将这些各个任务中相同的参数写入到全局配置文件`mtl_config.yaml`中,**框架会自动将该文件中的配置广播(broadcast)到各个任务实例。**
```yaml
task_instance: "mrqa, mlm4mrqa, match4mrqa"
target_tag: 1,0,0
save_path: "output_model/secondrun"
backbone: "ernie"
backbone_config_path: "pretrain_model/ernie/ernie_config.json"
vocab_path: "pretrain_model/ernie/vocab.txt"
do_lower_case: True
max_seq_len: 512 # 写入全局配置文件的参数会被自动广播到各个任务实例
batch_size: 4
num_epochs: 2
optimizer: "adam"
learning_rate: 3e-5
warmup_proportion: 0.1
weight_decay: 0.1
```
这里我们可以使用`target_tag`来标记目标任务和辅助任务,各个任务的tag使用逗号`,`隔开。target_tag与task_instance中的元素一一对应,当某任务的tag设置为1时,表示对应的任务被设置为目标任务;设置为0时,表示对应的任务被设置为辅助任务,默认情况下所以任务均被设置为目标任务(即默认`target_tag`为全1)。
辅助任务不会保存预测模型,且不会影响训练的终止,仅仅起到“陪同训练”的作用以期提高模型的泛化能力。当所有的目标任务达到预期的训练步数后多任务学习终止,框架自动为每个目标任务保存预测模型(inference model)到设置的`save_path`位置。
同时需要注意的是,这里`num_epochs`指代目标任务`mrqa`的训练epoch数量(训练集遍历次数)。
在训练过程中,默认每个训练step会从各个任务等概率采样,来决定当前step训练哪个任务。但包括辅助任务在内,各个任务的采样概率是可以被控制的。若用户希望改变采样比率,可以通过`mix_ratio`字段来进行设置,例如
```yaml
mix_ratio: 1.0, 0.5, 0.5
```
若将如上设置加入到全局配置文件中,则辅助任务`mlm4mrqa`和`match4mrqa`的采样概率/预估的训练步数仅为`mrqa`任务的一半。关于采样概率的更多介绍请参考进阶篇。
**3.开始多任务训练**
```python
import paddlepalm as palm
if __name__ == '__main__':
controller = palm.Controller('config_demo2.yaml', task_dir='demo2_tasks')
controller.load_pretrain('pretrain_model/ernie/params')
controller.train()
```
训练日志如下,在训练过程中可以看到每个任务的loss下降
```
Global step: 10. Task: mrqa, step 4/135 (epoch 0), loss: 6.235, speed: 0.75 steps/s
Global step: 20. Task: mrqa, step 8/135 (epoch 0), loss: 5.652, speed: 0.75 steps/s
Global step: 30. Task: mrqa, step 13/135 (epoch 0), loss: 6.031, speed: 0.75 steps/s
Global step: 40. Task: match4mrqa, step 13/25 (epoch 0), loss: 0.758, speed: 2.52 steps/s
Global step: 50. Task: mlm4mrqa, step 14/30 (epoch 0), loss: 7.322, speed: 3.24 steps/s
...
Global step: 547. Task: match4mrqa, step 13/25 (epoch 5), loss: 0.400, speed: 2.23 steps/s
Global step: 548. Task: match4mrqa, step 14/25 (epoch 5), loss: 0.121, speed: 3.03 steps/s
Global step: 549. Task: mrqa, step 134/135 (epoch 1), loss: 0.824, speed: 0.75 steps/s
Global step: 550. Task: mlm4mrqa, step 22/30 (epoch 4), loss: 6.903, speed: 3.59 steps/s
Global step: 551. Task: mrqa, step 135/135 (epoch 1), loss: 3.408, speed: 0.75 steps/s
mrqa: train finished!
mrqa: inference model saved at output_model/secondrun/mrqa/infer_model
```
**4.预测**
在得到目标任务的预测模型(inference_model)后,我们可以加载预测模型对该任务的测试集进行预测。在多任务训练阶段,在全局配置文件的`save_path`指定的路径下会为每个目标任务创建同名子目录,子目录中都有预测模型文件夹`infermodel`。我们可以将该路径传给框架的`controller`来完成对该目标任务的预测。
例如,我们在上一节得到了mrqa任务的预测模型。首先创建一个新的*Controller*,**并且创建时要将`for_train`标志位置为*False***。而后调用*pred*接口,将要预测的任务实例名字和预测模型的路径传入,即可完成相关预测。预测的结果默认保存在任务实例配置文件的`pred_output_path`指定的路径中。代码段如下:
```python
controller = palm.Controller(config='config_demo2.yaml', task_dir='demo2_tasks', for_train=False)
controller.pred('mrqa', inference_model_dir='output_model/secondrun/mrqa/infermodel')
```
我们可以在刚刚yaml文件中设置的`mrqa_output/`文件夹下的`predictions.json`文件中看到类似如下的预测结果
```json
{
"3f02f171c82e49828580007a71eefc31": "Ethan Allen",
"98d0b8ce19d1434abdb42aa01e83db61": "McDonald's",
"f0bc45a4dd7a4d8abf91a5e4fb25fe57": "Jesse James",
...
}
```
其中的每一行是测试集中的一个question对应的预测答案(其中的key为question的id,详情见mrc reader的说明文档)。
### DEMO3:多目标任务联合训练与任务层参数复用
本节我们考虑一个更加复杂的大规模多任务学习场景。假如手头有若干任务,其中每个任务都可能将来被用于预测(即均为目标任务),且鉴于这若干个任务之间存在一些相关性,我们希望将其中一部分任务的任务层参数也进行复用。分类数据集位于`data/cls4mrqa`内。
具体来说,例如我们有6个分类任务(CLS1 ~ CLS6),均为目标任务(每个任务的模型都希望未来拿来做预测和部署),且我们希望任务1,2,5的任务输出层共享同一份参数,任务3、4共享同一份参数,任务6自己一份参数,即希望对6个任务实现如图所示的参数复用关系。
![image2](https://tva1.sinaimg.cn/large/006y8mN6ly1g8issdoli5j31ow08ogxv.jpg)
如图,在同一个方框内的任务共享相同的任务层参数。
用户可通过运行如下脚本一键开始学习本节任务目标:
```shell
bash run_demo3.sh
```
**1. 配置任务实例**
为了演示方便,我们使用同一份数据集来创建6个分类的任务实例,分别命名为`cls1.yaml`, `cls2.yaml`, `cls3.yaml`, `cls4.yaml`, `cls5.yaml`, `cls6.yaml`。每个实例的配置文件中填入如下必要字段
```yaml
train_file: "data/cls4mrqa/train.tsv"
reader: cls
paradigm: cls
n_classes: 4
```
**2.配置全局参数**
在paddlepalm中可以轻松完成上述的复杂复用关系的定义,我们使用`task_reuse_tag`来描述任务层的参数复用关系,与`target_tag`一样,`task_reuse_tag`中的元素与`task_instance`一一对应,元素取值相同的任务会自动共享任务层参数,取值不同的任务不复用任务层参数。因此可以在全局配置文件中如下描述
```yaml
task_instance: "cls1, cls2, cls3, cls4, cls5, cls6"
task_reuse_tag: 0, 0, 1, 1, 0, 2
```
同时,这6个任务均为目标任务,因此我们不需要手动设置`target_tag`了(任务默认即为目标任务)。不过,**设置多个目标的情况下,依然可以添加辅助任务陪同这些目标任务进行训练**,这时候就需要引入`target_tag`来区分目标任务和辅助任务了。而后,我们在全局配置文件中写入其他必要的参数(backbone、优化器等)。
```yaml
save_path: "output_model/secondrun"
backbone: "ernie"
backbone_config_path: "pretrain_model/ernie/ernie_config.json"
vocab_path: "pretrain_model/ernie/vocab.txt"
do_lower_case: True
max_seq_len: 512 # 写入全局配置文件的参数会被自动广播到各个任务实例
batch_size: 4
num_epochs: 2
optimizer: "adam"
learning_rate: 3e-5
warmup_proportion: 0.1
weight_decay: 0.1
```
**3.开始多目标任务训练**
最后,我们像DEMO1和DEMO2一样创建`Controller`,实例化各个任务实例、载入预训练模型并启动多任务训练:
```yaml
import paddlepalm as palm
if __name__ == '__main__':
controller = palm.Controller('config_demo3.yaml', task_dir='demo3_tasks')
controller.load_pretrain('pretrain_model/ernie/params')
controller.train()
```
可以看到如下日志输出。
```
Global step: 1. Task: cls4, step 1/15 (epoch 0), loss: 1.344, speed: 0.50 steps/s
Global step: 10. Task: cls4, step 5/15 (epoch 0), loss: 1.398, speed: 2.19 steps/s
Global step: 20. Task: cls2, step 5/15 (epoch 0), loss: 1.260, speed: 2.64 steps/s
cls4: train finished!
cls4: inference model saved at output_model/thirdrun/infer_model
cls5: train finished!
cls5: inference model saved at output_model/thirdrun/infer_model
Global step: 30. Task: cls2, step 7/15 (epoch 0), loss: 0.961, speed: 0.04 steps/s
cls2: train finished!
cls2: inference model saved at output_model/thirdrun/infer_model
Global step: 40. Task: cls6, step 4/15 (epoch 0), loss: 1.412, speed: 2.74 steps/s
Global step: 50. Task: cls2, step 12/15 (epoch 0), loss: 1.011, speed: 2.19 steps/s
cls6: train finished!
cls6: inference model saved at output_model/thirdrun/infer_model
cls1: train finished!
cls1: inference model saved at output_model/thirdrun/infer_model
Global step: 60. Task: cls3, step 7/15 (epoch 0), loss: 1.363, speed: 2.72 steps/s
cls3: train finished!
cls3: inference model saved at output_model/thirdrun/infer_model
```
对本DEMO更深入的理解可以参考[多目标任务下的训练终止条件与预期训练步数](#多目标任务下的训练终止条件与预期训练步数)。
## 进阶篇
本章节更深入的对paddlepalm的使用方法展开介绍,并提供一些提高使用效率的小技巧。
### 配置广播机制
![PALM原理图](https://tva1.sinaimg.cn/large/006y8mN6ly1g8j1isf3fcj31ne0tyqbd.jpg)
要完成多任务学习,我们需要对主干网络、各个任务以及训练方式进行必要的配置,为此,框架实现了一套高效的配置广播机制。如上图,通过yaml语言可以描述主干网络和各个任务实例的相关配置,并存储于文件中。由于任务实例可能有多个,且部分超参数会同时被主干网络和任务实例用到,因此对于这些需要“重复配置”却取值相同的超参数,可以写入全局配置文件中,框架在解析全局配置文件时会自动将其“广播”给主干网络和各个任务实例。
此外,全局配置文件的优先级要高于主干网络和任务实例的配置文件,因此当某个超参数在全局配置文件的取值与其在其余位置的取值冲突时,框架以全局配置文件中的取值为准。
同时,为了方便进行大规模实验和超参数调优,凡是在**全局配置文件**中出现的超参数,均可以通过命令行进行控制,例如,对于如下全局配置文件
```yaml
...
learning_rate: 1e-3
batch_size: 32
...
```
我们可能希望通过命令行临时调整学习率`learning_rate`和批大小`batch_size`,因此我们在运行训练脚本时可以通过如下方式对其进行改变。
```shell
python demo3.py --learning_rate 1e-4 --batch_size 64
```
因此,各种配置方式的优先级如下
**命令行 > 全局配置文件 > 任务实例配置文件&主干网络配置文件**
### reader、backbone与paradigm的选择
reader、backbone和paradigm是实现各类任务的三大基础组件,其中reader为数据集载入与处理工具,将一定格式的输入数据集自动转换成确定的输出元素字典(如单词id序列,位置id序列等);backbone为主干网络,将来自reader的一部分输出转换为高阶抽象的输出元素字典(如词向量、句向量、编码器输出的上下文相关词向量等);paradigm为任务范式,将来自reader的一部分输出和backbone输出的对原始输入的高阶抽象转换为训练所需要的loss以及预测所需要的输出等。
框架对这三部分组件的实现基于一种解耦合的设计,每个组件都会包括对输入对象的描述inputs_attr(s)和对输出对象的描述outputs_attr,每个输入或输出对象都会包含名字(描述含义)、形状(tensor shape)和数值类型(data type)。例如,主干网络BERT的输入输出对象的声明如下
```python
@property
def inputs_attr(self):
return {"token_ids": [[None, None], 'int64'],
"position_ids": [[None, None], 'int64'],
"segment_ids": [[None, None], 'int64'],
"input_mask": [[None, None], 'float32']}
@property
def outputs_attr(self):
return {"word_embedding": [[None, None, self._emb_size], 'float32'],
"embedding_table": [[None, self._voc_size, self._emb_size], 'float32'],
"encoder_outputs": [[None, None, self._emb_size], 'float32'],
"sentence_embedding": [[None, self._emb_size], 'float32'],
"sentence_pair_embedding": [[None, self._emb_size], 'float32']}
```
Here `inputs_attr` describes BERT's input objects, `token_ids`, `position_ids`, `segment_ids` and `input_mask`, together with their shapes (None means the tensor size in that dimension is variable) and data types. `outputs_attr` describes the output objects the BERT module can provide, including `word_embedding`, `embedding_table`, `encoder_outputs` and so on.

When creating task instances, the user only has to ensure that every component's input objects are contained in the outputs of its upstream components; such components can then be combined freely. The upstream component of a backbone is the reader; the upstream components of a paradigm are both the reader and the backbone, as sketched below.
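The compatibility rule can be written down in a few lines; the helper below is purely illustrative and not a framework API:

```python
# Illustrative sketch of the compatibility rule described above:
# a component can be plugged in as long as every input object it
# declares is produced by one of its upstream components.

def is_compatible(inputs_attr, *upstream_outputs_attrs):
    """True if every declared input name is found among the upstream outputs."""
    available = {}
    for attrs in upstream_outputs_attrs:
        available.update(attrs)
    return all(name in available for name in inputs_attr)

# A paradigm draws on both the reader and the backbone, so one would check:
#   is_compatible(paradigm.inputs_attr, reader.outputs_attr, backbone.outputs_attr)
# while a backbone only draws on the reader:
#   is_compatible(backbone.inputs_attr, reader.outputs_attr)
```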
### Termination conditions and expected training steps with multiple target tasks

#### Multiple target tasks

The framework supports multiple target tasks: when the `task_instance` field of the global config file lists more than one task instance, **all of them are target tasks by default (i.e. `target_tag` is automatically filled with all 1s)**. For every task instance marked as a target task, the framework computes an expected number of training steps and saves an inference model for it once that number of steps is reached.

When there are multiple target tasks, the `num_epochs` field of the global config file (the number of passes over the training set) only applies to the first target task, called the main task. The framework derives the expected training steps of the other target tasks from the main task's training steps (controllable via `mix_ratio`, see the next section). **Note that apart from being the task `num_epochs` refers to, the main task is in no way different from the other target tasks.**

*Note: with multiple target tasks you can still use auxiliary tasks to improve the test-set performance of all target tasks, but remember to mark such auxiliary tasks with the auxiliary tag "0" in `target_tag`, e.g. as in the fragment below.*
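For instance, a global config fragment like the following (task names are hypothetical) marks the first two instances as target tasks and the third as an auxiliary task:

```yaml
task_instance: mrqa, match4mrqa, mlm4mrqa
target_tag: 1, 1, 0
```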
#### Training termination condition

Before training starts, the `Controller` computes an expected number of training steps for every target task. Once a target task completes its expected number of steps, the `Controller` saves that task's inference model and then continues multi-task training according to the configured task sampling probabilities. Multi-task learning terminates once every target task has reached its expected number of training steps. Note that the `Controller` computes no expected step count and saves no inference model for auxiliary tasks; they merely "accompany" the target tasks during training and have no influence on when multi-task learning terminates.
#### Task sampling probabilities and expected training steps

By default, every task has an equal probability of being sampled at each training step. If you want to change the sampling probability of certain tasks (e.g. a task with a small training set should be sampled less often, or a harder task should be trained more), you can control each task's sampling probability through the `mix_ratio` field of the global config file. For example, suppose we have three tasks, where the mrqa task is the target task and the others are auxiliary tasks, and we set `mix_ratio` as follows:
```yaml
task_instance: mrqa, match4mrqa, mlm4mrqa
mix_ratio: 1.0, 0.5, 0.5
```
This setting means that the expected number of times `match4mrqa` and `mlm4mrqa` are sampled is half that of the `mrqa` task. With mrqa set as the target task, if one epoch of mrqa takes 5000 steps and `num_epochs` is set to 2 in the global config file, then under the `mix_ratio` above the mrqa task will be trained for 5000\*2\*1.0=10000 steps, while `match4mrqa` and `mlm4mrqa` will each be trained for **roughly** 5000 steps.

> Note: since match4mrqa and mlm4mrqa are auxiliary tasks here, their actual number of training steps may be slightly more or less than 5000. For a target task, the step count is exactly 5000.
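The arithmetic behind these numbers can be written down directly; the helper below is only a sketch of the semantics described above, not a framework function:

```python
# Expected training steps per task, derived from the main task's epochs
# and the mix_ratio settings (mix_ratios[0] belongs to the main task).

def expected_steps(steps_per_epoch_main, num_epochs, mix_ratios):
    main_steps = steps_per_epoch_main * num_epochs  # exact for the main task
    return [main_steps * r / mix_ratios[0] for r in mix_ratios]

# mrqa trains one epoch in 5000 steps, num_epochs=2, mix_ratio "1.0, 0.5, 0.5":
print(expected_steps(5000, 2, [1.0, 0.5, 0.5]))
# -> [10000.0, 5000.0, 5000.0] for mrqa, match4mrqa, mlm4mrqa
```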
#### Computing expected training steps with multiple target tasks

When there are multiple target tasks, `num_epochs` only applies to the **first configured target task (the "main task")**; the expected training steps of the remaining target tasks and of the auxiliary tasks are then derived from the `mix_ratio` settings.
### Model saving and prediction

The `Controller` can save two kinds of models during training: checkpoint models and inference models.

A checkpoint model captures the full global state of the network at the current moment in training, including the global parameters, local parameters and persistent variables of the backbone, all tasks and the optimizer, i.e. the complete multi-task computation graph. Checkpoints are used to resume after an unexpected interruption, or to continue training the same model in stages. By default the `Controller` does not save checkpoints, but you can control the saving frequency by adding `save_every_n_steps` to the global config file; setting it to 5000, for example, saves a checkpoint every 5000 global training steps. Checkpoints are placed under the path given by `save_path` in the global config file, e.g. as in the fragment below.
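A corresponding global config fragment might look like this (the step interval and path are just examples):

```yaml
save_path: "output_model/thirdrun"
save_every_n_steps: 5000
```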
An inference model, in contrast, is the complete prediction model of a single task: it contains no parameters of other tasks, nor nodes that are unnecessary at inference time such as the optimizer or dropout layers. When saving an inference model, the `Controller` also saves the configuration needed for prediction, such as the model's input and output lists. Prediction is then run by calling the `pred` interface of an instantiated `Controller` on the task in question; see DEMO2 for a usage example.
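A rough sketch of such a prediction run, following the style of DEMO2 (config names and paths are examples and may differ in your setup):

```python
import paddlepalm as palm

if __name__ == '__main__':
    # build the Controller in prediction mode
    controller = palm.Controller(config='config_demo2.yaml', task_dir='demo2_tasks', for_train=False)
    # predict with the inference model saved for the mrqa task
    controller.pred('mrqa', inference_model_dir='output_model/secondrun/infer_model')
```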
### Distributed training

The framework seamlessly integrates single-GPU and multi-GPU single-machine training. When several GPUs are available, the framework automatically replicates the model across the cards, and at each step every card computes `batch_size` training samples; the framework automatically merges the gradients from all cards. For example, with 8 GPUs and `batch_size` set to 32, the effective batch size per step is 32\*8=256.

To train on just one card in a multi-GPU environment, control device visibility through the environment variable [CUDA_VISIBLE_DEVICES](https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/), e.g. as below.
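For example (the device id and script name are illustrative):

```shell
export CUDA_VISIBLE_DEVICES=0
python demo3.py
```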
## Appendix A: built-in dataset loading and processing tools (reader)

All built-in readers support both Chinese and English input data. **English data is read by default**; to read Chinese data, set the following in the config file
```yaml
for_cn: True
```
All built-in readers support the following fields

```yaml
vocab_path (REQUIRED): str. Path to the vocabulary file.
max_seq_len (REQUIRED): int. Maximum sequence length after tokenization (i.e. the maximum length of token ids). Note that after tokenization there are usually more token ids than original words (e.g. with a wordpiece tokenizer).
batch_size (REQUIRED): int. Batch size for training or prediction (the number of samples fed to the network per step).
train_file (REQUIRED): str. Path to the training set file. May be omitted when only predicting.
pred_file (REQUIRED): str. Path to the test set file. May be omitted when only training.
do_lower_case (OPTIONAL): bool, default False. Whether to lowercase uppercase English letters.
shuffle (OPTIONAL): bool, default True. Whether to shuffle the dataset during training; when True, the samples are globally shuffled. This flag has no effect at prediction time (the dataset is never shuffled for prediction).
seed (OPTIONAL): int. Random seed.
pred_batch_size (OPTIONAL): int. Batch size at prediction time; when unset, the prediction batch size falls back to the value of `batch_size`.
print_first_n (OPTIONAL): int. Print the first n samples of the dataset together with the corresponding reader output; default 0.
```
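In practice these fields usually live in the global config file; the DEMO1 config, for example, sets (paths are relative to the demo directory):

```yaml
vocab_path: "../../pretrain_model/bert/vocab.txt"
max_seq_len: 512
batch_size: 4
train_file: data/mrqa/train.json
do_lower_case: True
```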
#### Text classification reader: cls

This reader loads and processes text classification datasets. It accepts datasets in [tsv format](https://en.wikipedia.org/wiki/Tab-separated_values) with two columns: the sample label `label` and the raw text `text_a`. For sample datasets see the files in `data/cls4mrqa`, formatted like
```
label text_a
1 when was the last time the san antonio spurs missed the playoffshave only missed the playoffs four times since entering the NBA
0 the creation of the federal reserve system was an attempt toReserve System ( also known as the Federal Reserve or simply the Fed ) is the central banking system of the United States of America .
2 group f / 64 was a major backlash against the earlier photographic movement off / 64 was formed , Edward Weston went to a meeting of the John Reed Club , which was founded to support Marxist artists and writers .
0 Bessarabia eventually became under the control of which country?
```
***Note: the first row of the dataset must be a header naming each column***

This reader additionally supports the following config field

```yaml
n_classes (REQUIRED): int. Number of classes of the classification task.
```
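A minimal task-instance config for such a classification task could then look like this (the file path and class count are hypothetical):

```yaml
train_file: data/cls4mrqa/train.tsv
reader: cls
paradigm: cls
n_classes: 4
```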
The reader's output (the data yielded by the generator at each step) contains the following fields

```yaml
token_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the word id of a token in the text.
position_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the position id of a token in the text.
segment_ids: an all-zero [batch_size, seq_len] matrix, used to support the inputs of models such as BERT and ERNIE.
input_mask: a [batch_size, seq_len] matrix of 0s and 1s marking whether a position is padding (1 for a real token, 0 for a padding token).
label_ids: a [batch_size] vector whose elements are the class labels of the samples.
task_ids: an all-zero [batch_size, seq_len] matrix, used to support the inputs of the ERNIE model.
```

At prediction time, the data yielded by the reader does not contain the `label_ids` field.
#### Text matching reader: match

This reader loads and processes text matching datasets. It accepts datasets in [tsv format](https://en.wikipedia.org/wiki/Tab-separated_values) with three columns: the sample label `label` and the two texts to be matched, `text_a` and `text_b`. For sample datasets see the files in `data/match4mrqa`, formatted like (`**[TAB]**` marks a tab character)

```
label text_a text_b
1 From what work of Durkheim's was interaction ritual theory derived? **[TAB]** Subsequent to these developments, Randall Collins (2004) formulated his interaction ritual theory by drawing on Durkheim's work on totemic rituals that was extended by Goffman (1964/2013; 1967) into everyday focused encounters. Based on interaction ritual theory, we experience different levels
0 where is port au prince located in haiti **[TAB]** Its population is difficult to ascertain due to the rapid growth of slums in the hillsides
0 What is the world’s first-ever pilsner type blond lager, the company also awarded the Master Homebrewer Competition held in San Francisco to an award-winning brewer who won the prestigious American Homebrewers Associations' Homebrewer of the Year award in 2013? **[TAB]** of the Year award in 2013, becoming the first woman in thirty years, and the first African American person ever to ever win the award.
1 What has Pakistan told phone companies? **[TAB]** Islamabad, Pakistan (CNN) -- Under heavy criticism for a telling cell phone carriers to ban certain words in text messages, the Pakistan Telecommunication Authority went into damage control mode Wednesday.
```
***Note: the first row of the dataset must be a header naming each column***

The reader's output (the data yielded by the generator at each step) contains the following fields:

```yaml
token_ids: a [batch_size, seq_len] matrix; each row is one sample (a text pair) and each element is the word id of a token, with the two texts separated by the id of `[SEP]`.
position_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the position id of a token in the text.
segment_ids: a [batch_size, seq_len] matrix whose elements are 0 at the token positions of text 1 and 1 at the token positions of text 2, used to support the inputs of models such as BERT and ERNIE.
input_mask: a [batch_size, seq_len] matrix of 0s and 1s marking whether a position is padding (1 for a real token, 0 for a padding token).
label_ids: a [batch_size] vector whose elements are the class labels of the samples (0 means the two texts do not match, 1 means they match).
task_ids: an all-zero [batch_size, seq_len] matrix, used to support the inputs of the ERNIE model.
```

At prediction time, the data yielded by the reader does not contain the `label_ids` field.
#### Machine reading comprehension reader: mrc

This reader loads machine reading comprehension datasets using a sliding window: a long context is automatically split into sub-documents by a fixed stride, an answer span is computed for each sub-document paired with the question, and the results are merged at the final stage. The reader accepts datasets in [JSON format](https://en.wikipedia.org/wiki/JSON). For sample datasets see the files in `data/mrqa`, formatted as follows.
```json
{
    "version": "1.0",
    "data": [
        {"title": "...",
         "paragraphs": [
             {"context": "...",
              "qas": [
                  {"question": "...",
                   "id": "...",
                   "answers": [
                       {"text": "...",
                        "answer_start": ...},
                       {...},
                       ...
                   ]},
                  {...},
                  ...
              ]},
             {...},
             ...
         ]},
        {...},
        ...
    ]
}
```
The outermost structure of the dataset is a dictionary containing the dataset version `version` and the data `data`. Each element of `data` is a sample consisting of an article title `title` and a number of paragraphs `paragraphs`; each element of `paragraphs` holds a passage `context` and, based on that passage, a list `qas` of questions with their answers, all of which lie inside the passage. Each element of `qas` contains a question `question`, a globally unique identifier `id`, and one or more answers `answers`. Each answer holds the answer text itself `text` and its start position `answer_start` within `context`. Note that start positions are character-level. In a test set, `qas` may omit the `answers` field.

This reader supports the following additional config fields:

```yaml
doc_stride (REQUIRED): int. Stride of the sliding window applied to the context.
max_query_len (REQUIRED): int. Maximum length of the query.
max_answer_len (REQUIRED): int. Maximum answer length at prediction time; may be left empty when no prediction is run.
n_best_size (OPTIONAL): int. Size of the n-best list kept per sample when merging sliding-window results at prediction time.
```
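For example, the DEMO1 config uses the following values for this reader (`max_answer_len` and `n_best_size` here are illustrative additions):

```yaml
doc_stride: 128
max_query_len: 64
max_answer_len: 30
n_best_size: 20
```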
The reader's output (the data yielded by the generator at each step) contains the following fields:

```yaml
token_ids: a [batch_size, seq_len] matrix; each row is one sample (a text pair, with the context as text 1 and the question as text 2) and each element is the word id of a token, with the two texts separated by the id of `[SEP]`.
position_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the position id of a token in the text.
segment_ids: a [batch_size, seq_len] matrix whose elements are 0 at the token positions of text 1 and 1 at the token positions of text 2, used to support the inputs of models such as BERT and ERNIE.
input_mask: a [batch_size, seq_len] matrix of 0s and 1s marking whether a position is padding (1 for a real token, 0 for a padding token).
task_ids: an all-zero [batch_size, seq_len] matrix, used to support the inputs of the ERNIE model.
start_positions: a [batch_size] vector; each element is the start position of the answer span of the sample.
end_positions: a [batch_size] vector; each element is the end position of the answer span of the sample.
```

At prediction time, the data yielded by the reader does not contain the `start_positions` and `end_positions` fields, but additionally contains a `unique_ids` field:

```yaml
unique_ids: a [batch_size, seq_len] matrix giving the globally unique id of each sample, used to merge the sliding-window results after prediction.
```
#### Masked language model reader: mlm

This reader loads and processes masked language model datasets. It accepts datasets in [tsv format](https://en.wikipedia.org/wiki/Tab-separated_values). MLM is a self-supervised task, so the dataset contains a single column `text_a`; the reader automatically generates random training labels for each sample. The format is as follows
```
text_a
Subsequent to these developments, Randall Collins (2004) formulated his interaction ritual theory by drawing on Durkheim's work on totemic rituals that was extended by Goffman (1964/2013; 1967) into everyday focused encounters.
Presidential spokesman Abigail Valte earlier Saturday urged residents of low-lying and mountainous areas that could be hit hard by the storm to evacuate, the state news agency said, citing an interview conducted on a government radio station. World Vision, the Christian humanitarian organization, said Saturday that it had to postpone some of its relief efforts due to Nalgae, with two of three emergency teams set to deploy once the storm passes. Another team is in Bulcan province, most of which is "still submerged" because of Nesat. The group is focusing its post-Nesat efforts on two communities in Manila and three in the northern Isabela and Zambales provinces.
of the Year award in 2013, becoming the first woman in thirty years, and the first African American person ever to ever win the award. After an extensive career with the California State Legislature she began working for PicoBrew, a product development company in Seattle, WA that specializes in automated brewing equipment.
the gakkel ridge is a boundary between which two tectonic plates Mid-Atlantic Ridge ( MAR ) is a mid-ocean ridge , a divergent tectonic plate or constructive plate boundary located along the floor of the Atlantic Ocean , and part of the longest mountain range in the world . The ridge extends from a junction with the Gakkel Ridge ( Mid-Arctic Ridge ) northeast of Greenland southward to the Bouvet Triple Junction in the South Atlantic .
```
***Note: the first row of the dataset must be a header naming each column***

The reader's output (the data yielded by the generator at each step) contains the following objects:

```yaml
token_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the word id of a token in the text.
position_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the position id of a token in the text.
segment_ids: an all-zero [batch_size, seq_len] matrix, used to support the inputs of models such as BERT and ERNIE.
input_mask: a [batch_size, seq_len] matrix of 0s and 1s marking whether a position is padding (1 for a real token, 0 for a padding token).
mask_label: a [None] vector whose elements are the true word ids of the masked-out words.
mask_pos: a [None] vector of the same length as `mask_label`, with elements in one-to-one correspondence; each element is the position of a masked-out word.
task_ids: an all-zero [batch_size, seq_len] matrix, used to support the inputs of the ERNIE model.
```
## Appendix B: built-in backbones

BERT and ERNIE are built into the framework as backbones; more backbones such as XLNet will be added in the future.

#### BERT

BERT takes the following input objects

```yaml
token_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the word id of a token in the text.
position_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the position id of a token in the text.
segment_ids: a [batch_size, seq_len] matrix of 0s and 1s; 0 marks tokens belonging to text1 of a classification or matching task, 1 marks tokens belonging to text2 of a matching task.
input_mask: a [batch_size, seq_len] matrix of 0s and 1s marking whether a position is padding (1 for a real token, 0 for a padding token).
```
and provides the following output objects for downstream components.

```yaml
word_embedding: a [batch_size, seq_len, emb_size] float32 tensor; the (context-independent) word embedding sequences of the samples in the current batch.
embedding_table: a [vocab_size, emb_size] float32 matrix; the word embedding lookup table currently maintained by BERT.
encoder_outputs: a [batch_size, seq_len, hidden_size] float32 tensor; the BERT encoder's encoding of each sample in the current batch.
sentence_embedding: a [batch_size, hidden_size] float32 matrix; each row is the BERT encoder's sentence embedding of the corresponding sample in the current batch.
sentence_pair_embedding: a [batch_size, hidden_size] float32 matrix; each row is the BERT encoder's sentence-pair embedding of the corresponding sample in the current batch.
```
#### ERNIE

ERNIE takes the following input objects

```yaml
token_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the word id of a token in the text.
position_ids: a [batch_size, seq_len] matrix; each row is one sample and each element is the position id of a token in the text.
segment_ids: a [batch_size, seq_len] matrix of 0s and 1s; 0 marks tokens belonging to text1 of a classification or matching task, 1 marks tokens belonging to text2 of a matching task.
input_mask: a [batch_size, seq_len] matrix of 0s and 1s marking whether a position is padding (1 for a real token, 0 for a padding token).
task_ids: an all-zero [batch_size, seq_len] matrix, used to support ERNIE finetuning.
```
and provides the following output objects for downstream components.

```yaml
word_embedding: a [batch_size, seq_len, emb_size] float32 tensor; the (context-independent) word embedding sequences of the samples in the current batch.
embedding_table: a [vocab_size, emb_size] float32 matrix; the word embedding lookup table currently maintained by ERNIE.
encoder_outputs: a [batch_size, seq_len, hidden_size] float32 tensor; the ERNIE encoder's encoding of each sample in the current batch.
sentence_embedding: a [batch_size, hidden_size] float32 matrix; each row is the ERNIE encoder's sentence embedding of the corresponding sample in the current batch.
sentence_pair_embedding: a [batch_size, hidden_size] float32 matrix; each row is the ERNIE encoder's sentence-pair embedding of the corresponding sample in the current batch.
```
## Appendix C: built-in task paradigms (paradigm)

#### Classification paradigm: cls

The classification paradigm supports the following additional config fields:

```yaml
n_classes (REQUIRED): int. Number of classes of the classification task.
pred_output_path (OPTIONAL): str. Path for saving prediction results; if left empty, results are saved to the task directory under the path given by the `save_path` field of the global config file.
```

The classification paradigm takes the following input objects:

at training time:

```yaml
sentence_embedding: a [batch_size, hidden_size] float32 matrix; each row is the encoder's sentence embedding of the corresponding sample in the current batch.
label_ids: a [batch_size] vector whose elements are the class labels of the samples.
```

at prediction time:

```yaml
sentence_embedding: a [batch_size, hidden_size] float32 matrix; each row is the encoder's sentence embedding of the corresponding sample in the current batch.
```

At training time the paradigm outputs the loss; at prediction time it outputs the predicted probability of each class.
#### Matching paradigm: match

The matching paradigm supports the following additional config field:

```yaml
pred_output_path (OPTIONAL): str. Path for saving prediction results; if left empty, results are saved to the task directory under the path given by the `save_path` field of the global config file.
```

The matching paradigm takes the following input objects:

at training time:

```yaml
sentence_pair_embedding: a [batch_size, hidden_size] float32 matrix; each row is the encoder's sentence-pair embedding of the corresponding sample in the current batch.
label_ids: a [batch_size] vector whose elements are the class labels of the samples (0 means the two texts do not match, 1 means they match)
```

at prediction time:

```yaml
sentence_pair_embedding: a [batch_size, hidden_size] float32 matrix; each row is the encoder's sentence-pair embedding of the corresponding sample in the current batch.
```

At training time the paradigm outputs the loss; at prediction time it outputs the probability distribution over match/no-match.
#### Machine reading comprehension paradigm: mrc

The machine reading comprehension paradigm supports the following additional config fields:

```yaml
max_answer_len (REQUIRED): int. Maximum length of a predicted answer
n_best_size (OPTIONAL): int, default 20. The n-best size per sample in the n-best answer file saved at prediction time
pred_output_path (OPTIONAL): str. Path for saving prediction results; if left empty, results are saved to the task directory under the path given by the `save_path` field of the global config file
```

The machine reading comprehension paradigm takes the following input objects:

at training time:

```yaml
encoder_outputs: a [batch_size, seq_len, hidden_size] float32 tensor; the encoder's encoding of each sample in the current batch.
start_positions: a [batch_size] vector; each element is the start position of the answer span of the sample.
end_positions: a [batch_size] vector; each element is the end position of the answer span of the sample.
```

at prediction time:

```yaml
encoder_outputs: a [batch_size, seq_len, hidden_size] float32 tensor; the encoder's encoding of each sample in the current batch.
unique_ids: a [batch_size, seq_len] matrix giving the globally unique id of each sample, used to merge the sliding-window results after prediction.
```
#### Masked language model paradigm: mlm

This is an unsupervised task paradigm; it does not support prediction and is used only for (auxiliary) training. It takes the following input objects:

```yaml
mask_label: a [None] vector whose elements are the true word ids of the masked-out words.
mask_pos: a [None] vector of the same length as `mask_label`, with elements in one-to-one correspondence; each element is the position of a masked-out word.
embedding_table: a [vocab_size, emb_size] float32 matrix; the word embedding lookup table currently maintained by the backbone.
encoder_outputs: a [batch_size, seq_len, hidden_size] float32 tensor; the encoder's encoding of each sample in the current batch.
```
## Appendix D: configurable global parameters

```yaml
task_instance (REQUIRED): str. Name(s) of the task instance(s) to train or predict. In multi-task mode, separate multiple tasks with commas `,`. Names are taken from the file names of the task-instance config files (without the .yaml suffix).
mix_ratio (OPTIONAL): str. Sampling probability of each task during training; values are comma-separated and correspond one-to-one to the elements of task_instance. By default every task's sampling probability is 1.0, i.e. all tasks are sampled with equal probability (each with the same expected number of samples as the main task). See "Advanced topics: Termination conditions and expected training steps with multiple target tasks" for details.
target_tag (OPTIONAL): str. Target/auxiliary task flags; values are comma-separated and correspond one-to-one to the elements of task_instance. A task marked 1 is a target task, a task marked 0 is an auxiliary task. All values default to 1 (every task is a target task by default). See DEMO2 for a usage example.
task_reuse_tag (OPTIONAL): str. Task-layer reuse flags; values are comma-separated and correspond one-to-one to the elements of task_instance. Tasks with the same value automatically share task-layer parameters; tasks with different values do not. See DEMO3 for a usage example.
backbone (REQUIRED): str. Name of the backbone network.
backbone_config_path (OPTIONAL): str. Path to the backbone config file.
save_path (REQUIRED): str. Path under which checkpoints and each target task's inference model are saved.
vocab_path (REQUIRED): str. Vocabulary file in plain text, one word per line, used by the reader, the backbone and the tasks.
do_lower_case (OPTIONAL): bool. Case flag. Default False, i.e. case-sensitive.
for_cn: bool. Chinese-mode flag. Default False, i.e. input is assumed to be English; when set to True, tokenization, post-processing etc. are done for Chinese.
print_every_n_steps (OPTIONAL): int. Default 5. How often (in steps) to print logs during training.
save_every_n_steps (OPTIONAL): int. Default -1. How often to save checkpoints during training; by default no checkpoints are saved.
optimizer (REQUIRED): str. Optimizer name; currently the framework only supports adam, more optimizers will be supported in the future.
learning_rate (REQUIRED): float. Learning rate during training.
batch_size (REQUIRED): int. Batch size, i.e. the number of samples used per training or inference step.
num_epochs (REQUIRED): int. Number of training epochs of the main task.
use_gpu (OPTIONAL): bool. Default True. The framework uses GPUs for single-machine single-card or distributed training by default; set to False to train or infer on CPU.
warmup_proportion (OPTIONAL): float. Default 0. Proportion of warmup steps, relative to the estimated total number of training steps, when finetuning a pretrained model.
use_ema (OPTIONAL): bool. Default False. Whether to enable [ema](https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average) for training and inference.
ema_decay (OPTIONAL): float. Default 0. Decay rate of ema when enabled.
random_seed (OPTIONAL): int. Random seed, default 1.
```
## License
This tutorial is contributed by [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) and licensed under the [Apache-2.0 license](https://github.com/PaddlePaddle/models/blob/develop/LICENSE).
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""v1.1
BERT model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from paddle import fluid
from paddle.fluid import layers
from paddlepalm.backbone.utils.transformer import pre_process_layer, encoder
from paddlepalm.interface import backbone
class Model(backbone):
def __init__(self, config, phase):
        # self._is_training = phase == 'train'  # the backbone usually need not care about the run phase, since its outputs are essentially the same in every phase
self._emb_size = config["hidden_size"]
self._n_layer = config["num_hidden_layers"]
self._n_head = config["num_attention_heads"]
self._voc_size = config["vocab_size"]
self._max_position_seq_len = config["max_position_embeddings"]
self._sent_types = config["type_vocab_size"]
self._hidden_act = config["hidden_act"]
self._prepostprocess_dropout = config["hidden_dropout_prob"]
self._attention_dropout = config["attention_probs_dropout_prob"]
self._word_emb_name = "word_embedding"
self._pos_emb_name = "pos_embedding"
self._sent_emb_name = "sent_embedding"
        # Initialize all weights with a truncated normal initializer; all biases
        # are initialized to constant zero by default.
self._param_initializer = fluid.initializer.TruncatedNormal(
scale=config["initializer_range"])
@property
def inputs_attr(self):
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32']}
@property
def outputs_attr(self):
return {"word_embedding": [[-1, -1, self._emb_size], 'float32'],
"embedding_table": [[-1, self._voc_size, self._emb_size], 'float32'],
"encoder_outputs": [[-1, -1, self._emb_size], 'float32'],
"sentence_embedding": [[-1, self._emb_size], 'float32'],
"sentence_pair_embedding": [[-1, self._emb_size], 'float32']}
def build(self, inputs, scope_name=""):
src_ids = inputs['token_ids']
pos_ids = inputs['position_ids']
sent_ids = inputs['segment_ids']
input_mask = inputs['input_mask']
self._emb_dtype = 'float32'
# padding id in vocabulary must be set to 0
emb_out = fluid.layers.embedding(
input=src_ids,
size=[self._voc_size, self._emb_size],
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr(
name=scope_name+self._word_emb_name, initializer=self._param_initializer),
is_sparse=False)
# fluid.global_scope().find_var('backbone-word_embedding').get_tensor()
embedding_table = fluid.default_main_program().global_block().var(scope_name+self._word_emb_name)
position_emb_out = fluid.layers.embedding(
input=pos_ids,
size=[self._max_position_seq_len, self._emb_size],
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr(
name=scope_name+self._pos_emb_name, initializer=self._param_initializer))
sent_emb_out = fluid.layers.embedding(
sent_ids,
size=[self._sent_types, self._emb_size],
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr(
name=scope_name+self._sent_emb_name, initializer=self._param_initializer))
emb_out = emb_out + position_emb_out
emb_out = emb_out + sent_emb_out
emb_out = pre_process_layer(
emb_out, 'nd', self._prepostprocess_dropout, name=scope_name+'pre_encoder')
self_attn_mask = fluid.layers.matmul(
x=input_mask, y=input_mask, transpose_y=True)
self_attn_mask = fluid.layers.scale(
x=self_attn_mask, scale=10000.0, bias=-1.0, bias_after_scale=False)
n_head_self_attn_mask = fluid.layers.stack(
x=[self_attn_mask] * self._n_head, axis=1)
n_head_self_attn_mask.stop_gradient = True
enc_out = encoder(
enc_input=emb_out,
attn_bias=n_head_self_attn_mask,
n_layer=self._n_layer,
n_head=self._n_head,
d_key=self._emb_size // self._n_head,
d_value=self._emb_size // self._n_head,
d_model=self._emb_size,
d_inner_hid=self._emb_size * 4,
prepostprocess_dropout=self._prepostprocess_dropout,
attention_dropout=self._attention_dropout,
relu_dropout=0,
hidden_act=self._hidden_act,
preprocess_cmd="",
postprocess_cmd="dan",
param_initializer=self._param_initializer,
name=scope_name+'encoder')
next_sent_feat = fluid.layers.slice(
input=enc_out, axes=[1], starts=[0], ends=[1])
next_sent_feat = fluid.layers.reshape(next_sent_feat, [-1, next_sent_feat.shape[-1]])
next_sent_feat = fluid.layers.fc(
input=next_sent_feat,
size=self._emb_size,
act="tanh",
param_attr=fluid.ParamAttr(
name=scope_name+"pooled_fc.w_0", initializer=self._param_initializer),
bias_attr=scope_name+"pooled_fc.b_0")
return {'embedding_table': embedding_table,
'word_embedding': emb_out,
'encoder_outputs': enc_out,
'sentence_embedding': next_sent_feat,
'sentence_pair_embedding': next_sent_feat}
def postprocess(self, rt_outputs):
pass
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Ernie model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from paddle import fluid
from paddle.fluid import layers
from paddlepalm.backbone.utils.transformer import pre_process_layer, encoder
from paddlepalm.interface import backbone
class Model(backbone):
def __init__(self,
config,
phase):
        # self._is_training = phase == 'train'  # the backbone usually need not care about the run phase, since its outputs are essentially the same in every phase
self._emb_size = config['hidden_size']
self._n_layer = config['num_hidden_layers']
self._n_head = config['num_attention_heads']
self._voc_size = config['vocab_size']
self._max_position_seq_len = config['max_position_embeddings']
if config['sent_type_vocab_size']:
self._sent_types = config['sent_type_vocab_size']
else:
self._sent_types = config['type_vocab_size']
self._task_types = config['task_type_vocab_size']
self._hidden_act = config['hidden_act']
self._prepostprocess_dropout = config['hidden_dropout_prob']
self._attention_dropout = config['attention_probs_dropout_prob']
self._word_emb_name = "word_embedding"
self._pos_emb_name = "pos_embedding"
self._sent_emb_name = "sent_embedding"
self._task_emb_name = "task_embedding"
self._emb_dtype = "float32"
self._param_initializer = fluid.initializer.TruncatedNormal(
scale=config['initializer_range'])
@property
def inputs_attr(self):
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32'],
"task_ids": [[-1,-1, 1], 'int64']}
@property
def outputs_attr(self):
return {"word_embedding": [[-1, -1, self._emb_size], 'float32'],
"embedding_table": [[-1, self._voc_size, self._emb_size], 'float32'],
"encoder_outputs": [[-1, -1, self._emb_size], 'float32'],
"sentence_embedding": [[-1, self._emb_size], 'float32'],
"sentence_pair_embedding": [[-1, self._emb_size], 'float32']}
def build(self, inputs, scope_name=""):
src_ids = inputs['token_ids']
pos_ids = inputs['position_ids']
sent_ids = inputs['segment_ids']
input_mask = inputs['input_mask']
task_ids = inputs['task_ids']
# padding id in vocabulary must be set to 0
emb_out = fluid.layers.embedding(
input=src_ids,
size=[self._voc_size, self._emb_size],
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr(
name=scope_name+self._word_emb_name, initializer=self._param_initializer),
is_sparse=False)
# fluid.global_scope().find_var('backbone-word_embedding').get_tensor()
embedding_table = fluid.default_main_program().global_block().var(scope_name+self._word_emb_name)
position_emb_out = fluid.layers.embedding(
input=pos_ids,
size=[self._max_position_seq_len, self._emb_size],
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr(
name=scope_name+self._pos_emb_name, initializer=self._param_initializer))
sent_emb_out = fluid.layers.embedding(
sent_ids,
size=[self._sent_types, self._emb_size],
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr(
name=scope_name+self._sent_emb_name, initializer=self._param_initializer))
emb_out = emb_out + position_emb_out
emb_out = emb_out + sent_emb_out
task_emb_out = fluid.layers.embedding(
task_ids,
size=[self._task_types, self._emb_size],
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr(
name=scope_name+self._task_emb_name,
initializer=self._param_initializer))
emb_out = emb_out + task_emb_out
emb_out = pre_process_layer(
emb_out, 'nd', self._prepostprocess_dropout, name=scope_name+'pre_encoder')
self_attn_mask = fluid.layers.matmul(
x=input_mask, y=input_mask, transpose_y=True)
self_attn_mask = fluid.layers.scale(
x=self_attn_mask, scale=10000.0, bias=-1.0, bias_after_scale=False)
n_head_self_attn_mask = fluid.layers.stack(
x=[self_attn_mask] * self._n_head, axis=1)
n_head_self_attn_mask.stop_gradient = True
enc_out = encoder(
enc_input=emb_out,
attn_bias=n_head_self_attn_mask,
n_layer=self._n_layer,
n_head=self._n_head,
d_key=self._emb_size // self._n_head,
d_value=self._emb_size // self._n_head,
d_model=self._emb_size,
d_inner_hid=self._emb_size * 4,
prepostprocess_dropout=self._prepostprocess_dropout,
attention_dropout=self._attention_dropout,
relu_dropout=0,
hidden_act=self._hidden_act,
preprocess_cmd="",
postprocess_cmd="dan",
param_initializer=self._param_initializer,
name=scope_name+'encoder')
next_sent_feat = fluid.layers.slice(
input=enc_out, axes=[1], starts=[0], ends=[1])
next_sent_feat = fluid.layers.reshape(next_sent_feat, [-1, next_sent_feat.shape[-1]])
next_sent_feat = fluid.layers.fc(
input=next_sent_feat,
size=self._emb_size,
act="tanh",
param_attr=fluid.ParamAttr(
name=scope_name+"pooled_fc.w_0", initializer=self._param_initializer),
bias_attr=scope_name+"pooled_fc.b_0")
return {'embedding_table': embedding_table,
'word_embedding': emb_out,
'encoder_outputs': enc_out,
'sentence_embedding': next_sent_feat,
'sentence_pair_embedding': next_sent_feat}
def postprocess(self, rt_outputs):
pass
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Transformer encoder."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from functools import partial
import paddle.fluid as fluid
import paddle.fluid.layers as layers
from paddle.fluid.layer_helper import LayerHelper as LayerHelper
from functools import reduce # py3
def layer_norm(x, begin_norm_axis=1, epsilon=1e-6, param_attr=None, bias_attr=None):
helper = LayerHelper('layer_norm', **locals())
mean = layers.reduce_mean(x, dim=begin_norm_axis, keep_dim=True)
shift_x = layers.elementwise_sub(x=x, y=mean, axis=0)
variance = layers.reduce_mean(layers.square(shift_x), dim=begin_norm_axis, keep_dim=True)
r_stdev = layers.rsqrt(variance + epsilon)
norm_x = layers.elementwise_mul(x=shift_x, y=r_stdev, axis=0)
param_shape = [reduce(lambda x, y: x * y, norm_x.shape[begin_norm_axis:])]
param_dtype = norm_x.dtype
scale = helper.create_parameter(
attr=param_attr,
shape=param_shape,
dtype=param_dtype,
default_initializer=fluid.initializer.Constant(1.))
bias = helper.create_parameter(
attr=bias_attr,
shape=param_shape,
dtype=param_dtype,
is_bias=True,
default_initializer=fluid.initializer.Constant(0.))
out = layers.elementwise_mul(x=norm_x, y=scale, axis=-1)
out = layers.elementwise_add(x=out, y=bias, axis=-1)
return out
def multi_head_attention(queries,
keys,
values,
attn_bias,
d_key,
d_value,
d_model,
n_head=1,
dropout_rate=0.,
cache=None,
param_initializer=None,
name='multi_head_att'):
"""
    Multi-Head Attention. Note that attn_bias is added to the logits before
    computing the softmax activation, to mask certain selected positions so
    that they will not be considered in the attention weights.
"""
keys = queries if keys is None else keys
values = keys if values is None else values
if not (len(queries.shape) == len(keys.shape) == len(values.shape) == 3):
        raise ValueError(
            "Inputs: queries, keys and values should all be 3-D tensors.")
def __compute_qkv(queries, keys, values, n_head, d_key, d_value):
"""
Add linear projection to queries, keys, and values.
"""
q = layers.fc(input=queries,
size=d_key * n_head,
num_flatten_dims=2,
param_attr=fluid.ParamAttr(
name=name + '_query_fc.w_0',
initializer=param_initializer),
bias_attr=name + '_query_fc.b_0')
k = layers.fc(input=keys,
size=d_key * n_head,
num_flatten_dims=2,
param_attr=fluid.ParamAttr(
name=name + '_key_fc.w_0',
initializer=param_initializer),
bias_attr=name + '_key_fc.b_0')
v = layers.fc(input=values,
size=d_value * n_head,
num_flatten_dims=2,
param_attr=fluid.ParamAttr(
name=name + '_value_fc.w_0',
initializer=param_initializer),
bias_attr=name + '_value_fc.b_0')
return q, k, v
def __split_heads(x, n_head):
"""
        Reshape the last dimension of input tensor x so that it becomes two
dimensions and then transpose. Specifically, input a tensor with shape
[bs, max_sequence_length, n_head * hidden_dim] then output a tensor
with shape [bs, n_head, max_sequence_length, hidden_dim].
"""
hidden_size = x.shape[-1]
# The value 0 in shape attr means copying the corresponding dimension
# size of the input as the output dimension size.
reshaped = layers.reshape(
x=x, shape=[0, 0, n_head, hidden_size // n_head], inplace=True)
        # permute the dimensions into:
# [batch_size, n_head, max_sequence_len, hidden_size_per_head]
return layers.transpose(x=reshaped, perm=[0, 2, 1, 3])
def __combine_heads(x):
"""
        Transpose and then reshape the last two dimensions of input tensor x
        so that it becomes one dimension, which is the reverse of __split_heads.
"""
if len(x.shape) == 3: return x
if len(x.shape) != 4:
raise ValueError("Input(x) should be a 4-D Tensor.")
trans_x = layers.transpose(x, perm=[0, 2, 1, 3])
# The value 0 in shape attr means copying the corresponding dimension
# size of the input as the output dimension size.
return layers.reshape(
x=trans_x,
shape=[0, 0, trans_x.shape[2] * trans_x.shape[3]],
inplace=True)
def scaled_dot_product_attention(q, k, v, attn_bias, d_key, dropout_rate):
"""
Scaled Dot-Product Attention
"""
scaled_q = layers.scale(x=q, scale=d_key**-0.5)
product = layers.matmul(x=scaled_q, y=k, transpose_y=True)
if attn_bias:
product += attn_bias
weights = layers.softmax(product)
if dropout_rate:
weights = layers.dropout(
weights,
dropout_prob=dropout_rate,
dropout_implementation="upscale_in_train",
is_test=False)
out = layers.matmul(weights, v)
return out
q, k, v = __compute_qkv(queries, keys, values, n_head, d_key, d_value)
if cache is not None: # use cache and concat time steps
# Since the inplace reshape in __split_heads changes the shape of k and
# v, which is the cache input for next time step, reshape the cache
# input from the previous time step first.
k = cache["k"] = layers.concat(
[layers.reshape(
cache["k"], shape=[0, 0, d_model]), k], axis=1)
v = cache["v"] = layers.concat(
[layers.reshape(
cache["v"], shape=[0, 0, d_model]), v], axis=1)
q = __split_heads(q, n_head)
k = __split_heads(k, n_head)
v = __split_heads(v, n_head)
ctx_multiheads = scaled_dot_product_attention(q, k, v, attn_bias, d_key,
dropout_rate)
out = __combine_heads(ctx_multiheads)
# Project back to the model size.
proj_out = layers.fc(input=out,
size=d_model,
num_flatten_dims=2,
param_attr=fluid.ParamAttr(
name=name + '_output_fc.w_0',
initializer=param_initializer),
bias_attr=name + '_output_fc.b_0')
return proj_out
def positionwise_feed_forward(x,
d_inner_hid,
d_hid,
dropout_rate,
hidden_act,
param_initializer=None,
name='ffn'):
"""
Position-wise Feed-Forward Networks.
This module consists of two linear transformations with a ReLU activation
in between, which is applied to each position separately and identically.
"""
hidden = layers.fc(input=x,
size=d_inner_hid,
num_flatten_dims=2,
act=hidden_act,
param_attr=fluid.ParamAttr(
name=name + '_fc_0.w_0',
initializer=param_initializer),
bias_attr=name + '_fc_0.b_0')
if dropout_rate:
hidden = layers.dropout(
hidden,
dropout_prob=dropout_rate,
dropout_implementation="upscale_in_train",
is_test=False)
out = layers.fc(input=hidden,
size=d_hid,
num_flatten_dims=2,
param_attr=fluid.ParamAttr(
name=name + '_fc_1.w_0', initializer=param_initializer),
bias_attr=name + '_fc_1.b_0')
return out
def pre_post_process_layer(prev_out, out, process_cmd, dropout_rate=0.,
name=''):
"""
    Add residual connection, layer normalization and dropout to the out tensor
optionally according to the value of process_cmd.
This will be used before or after multi-head attention and position-wise
feed-forward networks.
"""
for cmd in process_cmd:
if cmd == "a": # add residual connection
out = out + prev_out if prev_out else out
elif cmd == "n": # add layer normalization
out_dtype = out.dtype
if out_dtype == fluid.core.VarDesc.VarType.FP16:
out = layers.cast(x=out, dtype="float32")
out = layer_norm(
out,
begin_norm_axis=len(out.shape) - 1,
param_attr=fluid.ParamAttr(
name=name + '_layer_norm_scale',
initializer=fluid.initializer.Constant(1.)),
bias_attr=fluid.ParamAttr(
name=name + '_layer_norm_bias',
initializer=fluid.initializer.Constant(0.)))
if out_dtype == fluid.core.VarDesc.VarType.FP16:
out = layers.cast(x=out, dtype="float16")
elif cmd == "d": # add dropout
if dropout_rate:
out = layers.dropout(
out,
dropout_prob=dropout_rate,
dropout_implementation="upscale_in_train",
is_test=False)
return out
pre_process_layer = partial(pre_post_process_layer, None)
post_process_layer = pre_post_process_layer
def encoder_layer(enc_input,
attn_bias,
n_head,
d_key,
d_value,
d_model,
d_inner_hid,
prepostprocess_dropout,
attention_dropout,
relu_dropout,
hidden_act,
preprocess_cmd="n",
postprocess_cmd="da",
param_initializer=None,
name=''):
"""The encoder layers that can be stacked to form a deep encoder.
    This module consists of a multi-head (self) attention followed by
    position-wise feed-forward networks, with both components accompanied
    by the post_process_layer to add residual connection, layer normalization
    and dropout.
"""
attn_output = multi_head_attention(
pre_process_layer(
enc_input,
preprocess_cmd,
prepostprocess_dropout,
name=name + '_pre_att'),
None,
None,
attn_bias,
d_key,
d_value,
d_model,
n_head,
attention_dropout,
param_initializer=param_initializer,
name=name + '_multi_head_att')
attn_output = post_process_layer(
enc_input,
attn_output,
postprocess_cmd,
prepostprocess_dropout,
name=name + '_post_att')
ffd_output = positionwise_feed_forward(
pre_process_layer(
attn_output,
preprocess_cmd,
prepostprocess_dropout,
name=name + '_pre_ffn'),
d_inner_hid,
d_model,
relu_dropout,
hidden_act,
param_initializer=param_initializer,
name=name + '_ffn')
return post_process_layer(
attn_output,
ffd_output,
postprocess_cmd,
prepostprocess_dropout,
name=name + '_post_ffn')
def encoder(enc_input,
attn_bias,
n_layer,
n_head,
d_key,
d_value,
d_model,
d_inner_hid,
prepostprocess_dropout,
attention_dropout,
relu_dropout,
hidden_act,
preprocess_cmd="n",
postprocess_cmd="da",
param_initializer=None,
name=''):
"""
The encoder is composed of a stack of identical layers returned by calling
encoder_layer.
"""
for i in range(n_layer):
enc_output = encoder_layer(
enc_input,
attn_bias,
n_head,
d_key,
d_value,
d_model,
d_inner_hid,
prepostprocess_dropout,
attention_dropout,
relu_dropout,
hidden_act,
preprocess_cmd,
postprocess_cmd,
param_initializer=param_initializer,
name=name + '_layer_' + str(i))
enc_input = enc_output
enc_output = pre_process_layer(
enc_output, preprocess_cmd, prepostprocess_dropout, name="post_encoder")
return enc_output
task_instance: "mrqa"
save_path: "output_model/firstrun"
backbone: "bert"
backbone_config_path: "../../pretrain_model/bert/bert_config.json"
batch_size: 4
num_epochs: 2
optimizer: "adam"
learning_rate: 3e-5
warmup_proportion: 0.1
weight_decay: 0.1
print_every_n_steps: 10
train_file: data/mrqa/train.json
reader: mrc
paradigm: mrc
vocab_path: "../../pretrain_model/bert/vocab.txt"
do_lower_case: True
max_seq_len: 512
doc_stride: 128
max_query_len: 64
import paddlepalm as palm
if __name__ == '__main__':
controller = palm.Controller('config.yaml')
controller.load_pretrain('../../pretrain_model/bert/params')
controller.train()
export CUDA_VISIBLE_DEVICES=0
python run.py
{
"version": "1.1",
"data": [
{
"paragraphs": [
{
"qas": [
{
"question": "Revolutionary War hero: \"His spirit is in Vermont now\"",
"id": "3f02f171c82e49828580007a71eefc31",
"answers": [
{
"text": "Ethan Allen",
"answer_start": 976
}
]
}
],
"context": "[SEP] [SEP] Roadside Historic Markers List | Historic Sites [SEP] The Vermont State Society Daughters of the American Revolution now ..... lie in \nthis cemetery beneath the marble statue, but his spirit is in Vermont now. ... Hero \nof Gettysburg In this park on July 22, 1863, Vermont's only ethnic Civil War unit... [SEP] [SEP] On April 30, 1900 this engineer gave his life in a train crash to save ... [SEP] Jun 25, 2016 ... video icon Revolutionary War hero His spirit is in Vermont now # Quiz # \nQuestion. 0:29. Amazing Videos. No views... [SEP] [SEP] Revolutionary War hero His spirit is in Vermont now ... - YouTube [SEP] Jul 23, 2016 ... Revolutionary War hero His spirit is in Vermont now # Quiz # Question. Videos \nAmazing Tube. SubscribeSubscribedUnsubscribe 00. Loading. [SEP] [SEP] I BROKED IT by outofculture Pull Request #1 petekinnecom ... [SEP] + \"question\": \"'Revolutionary War hero: \\\"His spirit is in Vermont now\\\"'\",. + \"value\": \n\"$1000\",. + \"answer\": \"Ethan Allen\". + }. + ]. + },. + {. + \"name\": \"3-LETTER... [SEP] [SEP] Jeopardy! #1 Flashcards | Quizlet [SEP] ... TRIBUTES 1000 Revolutionary War hero: \"His spirit is in Vermont now\" ..... \nJUST THE FACTS 400 This hero of several books is 11 when he discovers he's a\n... [SEP] [SEP] Vermont Historical Markers - The Historical Marker Database [SEP] General John Strong was a Revolutionary War patriot and a prominent early \ncitizen of Addison ..... Robert Cochran Revolutionary Hero Settled Here, 1769 \n.... lie in this cemetery beneath the marble statue, but his spirit is in Vermont now. [SEP] [SEP] Burial Place of General Ethan Allen Historical Marker [SEP] Jan 15, 2012 ... ... lie in this cemetery beneath the marble statue, but his spirit is in Vermont now. \n... Vermont Edition: Myth, Hero, Legend: Ethan Allen's Founding Legacy. ... \nVermont history, with stories of revolutionary campaigns and political deals. ... \nPatriots & Patriotism Settlements & Settlers War, US Revolutionary . "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "In 1963, live on \"The Art Linkletter Show\", this company served its billionth burger",
"id": "98d0b8ce19d1434abdb42aa01e83db61",
"answers": [
{
"text": "McDonald's",
"answer_start": 1307
}
]
}
],
"context": "[SEP] [SEP] Assignment 13 [SEP] Nov 30, 2014 ... ... sunshine each year ## 4 In 1963, live on \"The Art Linkletter Show\", this \ncompany served its billionth burger ## 5 Signer of the Dec. of Indep.,... [SEP] [SEP] RPubs - What Is This Obscure Question, Trebek? [SEP] Oct 12, 2016 ... ... hours of sunshine each year' ## 4 'In 1963, live on \"The Art Linkletter Show\", \nthis company served its billionth burger' ## 5 'Signer of the Dec. [SEP] [SEP] Unable to Bulk Import Free flow text MonetDB.R [SEP] I am trying to import a dataset of 217000 records (Jeopardy Dataset) into \nMonetDB through the MonetDB.R interface. The file is a CSV file. [SEP] [SEP] I BROKED IT by outofculture Pull Request #1 petekinnecom ... [SEP] + {. + \"name\": \"THE COMPANY LINE\",. + \"challenges\": [. + {. + \"question\": \"'In \n1963, live on \\\"The Art Linkletter Show\\\", this company served its billionth burger'\",\n. [SEP] [SEP] Hottest 'monetdblite' Answers - Stack Overflow [SEP] ... \"4680,12/31/2004,Jeopardy!,THE COMPANY LINE,$200 ,\\\"In 1963, live on \\\"\\\"\nThe Art Linkletter Show\\\"\\\", this company served its billionth burger\\\",... r monetdb\n... [SEP] [SEP] History of the Hamburger - Think Magazine [SEP] Oct 29, 2009 ... If a ground beef patty served between two slices of bread is a hamburger, then \ncredit goes to .... In 1963 McDonald's serves its one-billionth burger on The Art \nLinkletter Show. ... It was the first time a company used another company's name \nin an advert so McDonald's ... Mixing makes our live's much easier! [SEP] [SEP] Food Timeline: 1961 to 1965 - Food History Events [SEP] They purchased $95 in food stamps for their 15-person household. ... 1961 \nQuaker Oats Company introduced 'Life' cereal. ... Mills Country Club, in a live \nunrehearsed cooking show titled 'Deadline for Dinner.' ... 1963 The one billionth \nMcDonald's hamburger was served by Ray Kroc on the Art ... Burgers are 15 \ncents each. [SEP] [SEP] August, 2012 | Tom Feltenstein's Power Marketing Academy [SEP] In every company, people are going to make fun of the boss; it's just that in good \n.... Ray Kroc serves the 500 millionth burger to Under Secretary of Agriculture, \nCharles ... McDonald's serves its one-billionth hamburger on the Art Linkletter \nShow. ... and sales of $130 million and issues its first annual report for the year \n1963. [SEP] [SEP] The Most Popular Product Of All Time - Slashdot [SEP] Jul 28, 2016 ... Apple announced Wednesday that it has sold more than one billion ... 1 billionth \nhamburger (famously, on the Art Linkletter Show in 1963), ... That one company \nmakes all the different versions of iphone and ..... In 2013 it was predicted \nMcDonalds would be serving their 300 billionth burger (of any type). [SEP] [SEP] PPT McDonald PowerPoint presentation | free to download - id ... [SEP] ... modified by: Tech Support Document presentation format: On-screen Show \nCompany: Charlottesville City Schools ... And it's free to register and free to log in! "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Outlaw: \"Murdered by a traitor and a coward whose name is not worthy to appear here\"",
"id": "f0bc45a4dd7a4d8abf91a5e4fb25fe57",
"answers": [
{
"text": "Jesse James",
"answer_start": 16
}
]
}
],
"context": "[SEP] [SEP] 18: Jesse James: \"Murdered by a traitor and a coward whose name ... [SEP] In the Wild West, Jesse James was legendary -- a Robin Hood-like figure who the \npublic loved and lawmakers hated. The outlaw's notorious bank robbing spree... [SEP] [SEP] Jesse James is murdered - Apr 03, 1882 - HISTORY.com [SEP] Jesse James, one of America's most notorious outlaws, is shot to death by Robert \nFord, a member of his gang who hoped to collect the bounty on Jesse's head. [SEP] [SEP] Crime History: Outlaw Jesse James killed by 'coward' Robert Ford ... [SEP] Apr 2, 2009 ... On this day, April 3, 1882, outlaw Jesse James is shot to death by fellow gang \nmember Robert Ford. For 16 years, the James Gang, led by... [SEP] [SEP] That Dirty Little Coward That Shot Mr. Howard - chimesfreedom [SEP] Apr 2, 2014 ... The song Jesse James, with the lyrics quoted above, referred to the outlaw \nJesse Woodson James by his famous real name and by the alias... [SEP] [SEP] Brad Pitt Archives - chimesfreedom [SEP] The song Jesse James, with the lyrics quoted above, referred to the outlaw \nJesse Woodson James by his famous real name and by the alias he was using at\n... [SEP] [SEP] Robert Ford: The Man Who Shot Jesse James Civil War Saga [SEP] Oct 18, 2011 ... Robert Newton Ford was an outlaw from Missouri born on January 31, 1862. Like \nmany young Missouri men of his time, he grew up admiring... [SEP] [SEP] The Assassination of Jesse James by the Coward Robert Ford - IMDb [SEP] Others films about this legendary outlaw are : The classic version (1939) titled \nJesse James(1939) with Tyrone Power and Henry Fonda, The return of Frank... [SEP] [SEP] Blog - Famous Last Words: The Best Epitaphs Ever To Grace A ... [SEP] Mar 20, 2015 ... Jesse James: Murdered by a traitor and a coward whose name is not worthy to \nappear here. Jesse James was known for his many crimes and... [SEP] [SEP] Famous Last Words | Britannica Blog [SEP] Nov 2, 2011 ... Murdered by a Traitor and a Coward Whose Name is Not Worthy to Appear Here\n. Though a memorable epitaph, James's body has been... [SEP] [SEP] Jesse James by on Prezi [SEP] Membership fluctuated from robbery to robbery, as the outlaws' raids were \nusually separated by many months. At various times, it included the Younger \nBrothers... [SEP] [SEP] 11/14/2012 - Welcome to the Boonville Herald [SEP] Nov 14, 2012 ... On the tombstone of the outlaw Jesse James is this: Murdered by a traitor and a \ncoward whose name is not worthy to appear here. [SEP] [SEP] A Dynasty of Western Outlaws - Paul Iselin Wellman - Google Books [SEP] Yet it is less morbid, better documented, and more interpretively written than \nearlier galleries of western outlaws.\"-The New York Times Book Review. [SEP] [SEP] Elder Family History Research - Frank and Jesse ... - Family Trail [SEP] Federal troops were searching for them, and although regular Southern troops \nwere pardoned, guerrillas were considered to be outlaws, if found they were shot. [SEP] [SEP] Epitaphs - Wikiquote [SEP] \"Gone but not forgotten.\" Buried beside, and sharing a tombstone with, his \nbrother Marvin (aka \"Buck\"). Outlaw, bank robber and partner of Bonnie Parker. [SEP] [SEP] culture/the-history-of-man-jesse-james-is-murdered-today-in-1882/ [SEP] Apr 3, 2016 ... ... 28 days, Murdered by a traitor and a coward whose name is not worthy to \nappear here. The teenage James brothers joined up with southern... [SEP] [SEP] Jesse James: American Outlaw or American Hero? 
| Facebook [SEP] Jesse James: American Outlaw or American Hero? July 7, 2014 . Murdered by a \ntraitor and a coward whose name is not worthy to appear here. ~ words on... [SEP] [SEP] Lynn Ashby: Tomb it may concern, of obits and epitaphs [SEP] Aug 29, 2013 ... James' epitaph contains this line: Murdered by a traitor and a coward whose \nname is not worthy to appear here"
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Steven Tyler of this band lent his steamin' vocals to \"Train Kept A-Rollin'\", first popularized by the Yardbirds",
"id": "2dc90736586049d298a10ed93567f0db",
"answers": [
{
"text": "Aerosmith",
"answer_start": 362
}
]
}
],
"context": "[SEP] [SEP] Train Kept A-Rollin' - Wikipedia [SEP] Train Kept A-Rollin is a song first recorded by American jazz and rhythm and \nblues musician ... In 1965, the Yardbirds popularized the song as an early \npsychedelic blues rock song, due ... Tiny Bradshaw and his band first recorded \"\nThe Train Kept A-Rollin'\" in 1951. ..... Steve Tyler and Joe Perry at 2010 \nAerosmith concert. [SEP] [SEP] The Yardbirds - Wikipedia [SEP] The Yardbirds are an English rock band formed in London in 1963 that had a \nstring of hits ... Beck played his first gig with the Yardbirds only two days after \nClapton's ... the U.S. B-side, \"The Nazz Are Blue\", features a rare lead vocal by \nBeck. .... of forming a band with Jimmy Page and Steve Winwood and he would \nname it... [SEP] [SEP] Train Kept A-Rollin' by Aerosmith Songfacts [SEP] This is a cover of a song popularized by The Yardbirds in the mid-1960s when \nJimmy ... The Yardbirds played \"Train Kept A-Rollin'\" at that show, and Steven \nTyler ... Burnette Trio, one of the first (and many say best) Rockabilly bands- in \n1956. .... Elton John also threw fake live concert sounds on to his studio recording \nof... [SEP] [SEP] Jeopary Questions page 2246 - -O-O-O - TriviaBistro.com [SEP] MUSICAL TRAINS: This \"Modern Girl\" first hit the Billboard Top 10 with \"Morning \nTrain (Nine To Five)\" ... MUSICAL TRAINS: Steven Tyler of this band lent his \nsteamin' vocals to \"Train Kept A-Rollin'\", first popularized by the Yardbirds. [SEP] [SEP] June 29, 2008 [SEP] 9, 1087, this English king died at Rouyn in Normandy after falling from his horse \nWilliam the Conqueror ... Steven Tyler of this band lent his steamin' vocals to \"\nTrain Kept A-Rollin'\", first popularized by the Yardbirds Aerosmith Born in Ghana \nin... [SEP] [SEP] Rock This Way || Aerosmith News - MARCH 2004 [SEP] Tony and his friends, including Aerosmith frontman Steven Tyler's daughter, .... \nGuitarist Joe Perry handles lead vocals on the dark and nasty \"Back Back Train. \n...... Aretha Franklin first popularized the song under the title \"I Never Loved a \nMan ...... of the old blues number \"Train Kept a Rollin'\" from its second album in \n1974. [SEP] [SEP] Performances and adaptations of The Star-Spangled Banner - Revolvy [SEP] Other methods have included singing the anthem using different vocal ranges ... \nVersions Igor Stravinsky 's first of his four 1941 arrangements of the ... Steven \nTyler of Aerosmith was invited to sing the national anthem at the ..... Train Kept A-\nRollin' ... In 1965, the Yardbirds popularized the song as an early psychedelic \nblues... [SEP] [SEP] Full text of \"Musical_DropBooks\" - Internet Archive [SEP] His first release went nowhere, but his second, You Want It, You Got It (1981), \nsold well ..... From Unknowns to Superstars Singer Steve Tyler and guitarist Joe \nPerry .... \"Jaded\" before segueing into the powerhouse classic, \"Train Kept a \nRollin. ...... childlike voice lent an eerie innocence to the band's sound in the early \ndays;... [SEP] [SEP] 1001 Songs You Must Hear Before You Die - Scribd [SEP] Clapton's electrified versionCrossroadswas a staple of his band .... and \nvocal harmonies as warm as a mug of steaming java, they rapidly became a ..... \nChurchill Kohlman's song, first made popular by Ruth Casey, had long since lost \nits ...... which coupled Tiny Bradshaw's Train Kept A-Rollin' with a composition \nby... 
[SEP] [SEP] record reviews, rants and raves - home sweet home [SEP] Although the Bad Boys from Beantown's first greatest hits package was for from \nperfect, ... STEVEN TYLER's swaggering sleazoid screech and the dirty blooze-\nrock axe ...... TRAIN KEPT A ROLLIN', (which BECK's old band THE YARDBIRDS \n...... technically perfect singing voice of the three, also lent his pipes to YOUNG's... [SEP] [SEP] Rest in Peace to Aaron Swartz beloved friend of the internet ... [SEP] Penguin "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Beyond ovoid abandonment, beyond ovoid betrayal... you won't believe the ending when he \"Hatches the Egg\"",
"id": "9aa1a16d4d1c4d8c874dc8cad32d2c49",
"answers": [
{
"text": "Horton",
"answer_start": 1088
}
]
}
],
"context": "[SEP] [SEP] I BROKED IT by outofculture Pull Request #1 petekinnecom ... [SEP] 1: Lettered in hoops, football & lacrosse at Syracuse & if you think he couldn't act, \n..... ovoid abandonment, beyond ovoid betrayal... you won't believe the ending... [SEP] [SEP] Literature / Parental Abandonment - TV Tropes [SEP] In Taran Wanderer he tries to find out who his parents were, and ends up losing \ninterest. ... Malcolm Dresden's death is way beyond being suspicious. ..... and \nuncle, also emotionally abandon him to the extent that he figures they won't \nbother to .... on the edge of town, except for a scary, silent groundskeeper who \nthey avoid. [SEP] [SEP] Channel logs for #basilmarket on Saturday the 19th of September ... [SEP] Sep 18, 2015 ... ... ovoid abandonment, beyond ovoid betrayal... you won't believe the ending \nwhen he \"\"Hatches the Egg\",\"Horton\". [02:13:33], <Metawe> wtf. [SEP] [SEP] Seuss, Dr. 1904-1991 - Encyclopedia.com [SEP] And when you're alone, there's a very good chance / you'll meet things that scare \nyou ... He won a Lewis Carroll Shelf Award in 1958 for Horton Hatches the Egg \nand in ..... Seuss's greatest anti-alphabet is, of course, On beyond Zebra (1955) in \n...... Blake's art and Mitchell's criticism avoid the metaphor of the \"sister arts\" and... [SEP] [SEP] Parental Abandonment/Literature - All The Tropes [SEP] Nov 15, 2015 ... Everything About Fiction You Never Wanted to Know. < Parental ... Briar: Street \nrat whose mother was murdered when he was four. Evvy: Sold as ... In Taran \nWanderer he tries to find out who his parents were, and ends up losing interest. \nPrincess ... Malcolm Dresden's death is way beyond being suspicious. [SEP] [SEP] Thank You - The Scholastic Store [SEP] He gets shooed away from a hamburger, a pizza, a dog's bones, and even ..... \nHowever, this betrayal is overshadowed when Amy and Dan make an even more \n.... You won't believe why this old lady swallows a shell, a crab, a fish, a gull, a \npail, .... the wolf pup who rose up to change forevever the Wolves of the Beyond. [SEP] [SEP] Authors Me and My Kindle [SEP] But if you read Irving's book all the way to the end, there's an even bigger \nsurprise. ... but in the end he couldn't successfully navigate himself into their \nsociety. ..... It opens with a funny story about hiding in the back of a van to avoid \nthe press on ..... slowly as they face an even bigger challenge beyond the \nbaseball diamond. [SEP] [SEP] March | 2016 | A. B. Funkhauser, Author [SEP] Mar 21, 2016 ... During his quest he confronts Virginia City's most prominent mine ... Thank you \nso much Lady Brenda for stopping by and sharing your latest success with us. ..... \nshot in the head, caused her to avoid being seen by anybody in uniform. .... She \nand a few others were too tired to flee and slept beyond the... [SEP] [SEP] Fiction | A. B. Funkhauser, Author [SEP] It's a heart-beating journey through mystery, murder, betrayal and passionate \nlove. .... being shot in the head, caused her to avoid being seen by anybody in \nuniform. .... She and a few others were too tired to flee and slept beyond the \nallowed ..... conclusion to the Harvester Series takes several turns you won't see \ncoming! [SEP] [SEP] audre lordezami, sister outsider, undersong [SEP] Cook for moving history beyond nightmare into structures for the future .... knew \nabout Paradise Plums-hard, oval candies, cherry-red on one side ..... Mrs. Baker \nread me Madeline, and Horton Hatches the Egg, both of .... 
tell your mother that \nyou won't even try. ...... If I ran out the other end of the row he would not follow,. [SEP] [SEP] The Motion Picture Production Code film ... - Films On Super 8 [SEP] 172, 1942, 279, So You Want to Give Up Smoking, So "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "\"500 Hats\"... 500 ways to die. On July 4th, this young boy will defy a king... & become a legend",
"id": "4113be8423d14a4790a5c5e569d4595a",
"answers": [
{
"text": "Bartholomew Cubbins",
"answer_start": 445
}
]
}
],
"context": "[SEP] [SEP] I BROKED IT by outofculture Pull Request #1 petekinnecom ... [SEP] + \"question\": \"'<a href=\\\"http://www.j-archive.com/media/2004-12-31_DJ_24.mp3\\\n\">\\\"500 Hats\\\"... 500 ways to die. On July 4th, this young boy will defy a king. [SEP] [SEP] Board Approved Materials 2005 - 2015 - cloudfront.net [SEP] Feb 13, 2013 ... Print 1,000 Places to see in the USA and Canada before you die ..... Print 4th of \nJuly, The ... Video 500 Hats Of Bartholomew Cubbins ... Print 9 x 13 the pan that \ncan : more than 370 family favorites to fit ...... Print Alien invasion : invasive \nspecies become major menaces ...... Print Boy who couldn't die, The. [SEP] [SEP] Loganberry Books: Stump the Bookseller: AB [SEP] I had a book when I was a boy that was the illustrated alphabet. ...... But should \nthe water spill out, they become very weak and may even die. This is the story of \na ...... Adolescent boy is at picnic in park with family on July 4th. Heads ..... This is \na three volume set, The 500 Hats of Bartholomew Cubbins is in Volume 1. \nThere's... [SEP] [SEP] book lists from children's literature, briefly - EveryDay Learners [SEP] to the land of the Wild Things, where he becomes their king. .... 12 Ways to Get to \n11. ... Introduces colors and shapes with illustrations of shapes on die-cut pages \nthat ..... A young boy is worried about what will happen to his body when he hears \n.... The 500 Hats of Bartholomew Cubbins. ...... Tales of a Fourth Grade Nothing. [SEP] [SEP] Wagner Dream, un livret de Jean-Claude Carrire (videmment ... [SEP] Story of a boy and his dragon and their adventures...amazing read! ...... William \nJames: \"Could the young but realize how soon they will become mere ...... It was \nJuly 4th (my birthday) - around that time, Los Angeles is usually enjoying ..... \nStreet (1937) The 500 Hats Of Bartholomew Cubbins (1938) The King's Stilts (\n1939)... [SEP] [SEP] Last photograph taken in a studio setting of Abraham Lincoln ... [SEP] Abe died on April 15, 1865 in Washington, D.C. Abraham Lincoln was shot and \n.... Cole became the second African American woman to become a doctor in \n1867. ...... The United States Flag Is Folded 13 Times Because. patriotic july 4th \njuly ..... Street (1937) The 500 Hats Of Bartholomew Cubbins (1938) The King's \nStilts... [SEP] [SEP] ebooksforlessph_ [SEP] Feb 11, 2016 ... 11 Ways to Forget Your Ex .... I Married The Ice King - Ella Strange .... Three \nWords, Eight Letters, If I Say it, Will I be Yours ... Boys, Bears, and a Serious Pair \nof Hiking Boots .... Bright Young Things 3 - The Lucky Ones .... Blue Eyes Trilogy \n2 - Becoming a Legend ..... The 500 Hats of Bartholomew Cubbins. [SEP] [SEP] Book Sale Sample Titles - Davenport Schools [SEP] Young, Steve, 1947-. 2006 ... 50 ways & 50 reasons you can abstain from sex: ... \nThe 500 hats of Bartholomew Cubbins .... AIDS : an all-about guide for young \nadults .... The amazing and death defying diary of Eugene ..... Apple pie 4th of \nJuly ..... Bad boys. Palatini, Margie. 2003. Bad girls don't die. Alender, Katie. 2009\n. [SEP] [SEP] celebrate your worth - Google Search | Celebrate your Life not your ... [SEP] I hope these children's books about disabilities will help you begin some \nimportant ... This comes in handy when trying to explain to young children who \nalways ask .... #MickJagger #CharlieWatts #RonWood #Rock #Legend #Quote #\nLife #Book ...... Kids Books that Celebrate Fireworks, the Flag and July 4th - \nInspire... 
[SEP] [SEP] WANTED Known - Papers Past [SEP] time a young men'sChristian Association tone- .... vour Money go as far as you \ncan, and buy .... 4th July, for the Erection of a Chimney ..... See our superior Bal- \n... hive Watertights, 12s 6d,; Boys' extra .... L500,000. "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "When it began on Pan Am & Qantas in the late '70s, it was basically a roped-off part of the economy cabin with free drinks",
"id": "4b7a7b560b094e309c10e8dd11e9c6fc",
"answers": [
{
"text": "business class",
"answer_start": 442
}
]
}
],
"context": "[SEP] [SEP] I BROKED IT by outofculture Pull Request #1 petekinnecom ... [SEP] + \"question\": \"'When it began on Pan Am & Qantas in the late '70s, it was \nbasically a roped-off part of the economy cabin with free drinks'\",. + \"value\": \"$800\n\",. [SEP] [SEP] Jeopardy! #1 Flashcards | Quizlet [SEP] AIRLINE TRAVEL 800 When it began on Pan Am & Qantas in the late '70s, it was \nbasically a roped-off part of the economy cabin with free drinks. business class. [SEP] [SEP] Boeing 747 Classic Cabin Schemes - YouTube [SEP] May 27, 2008 - 4 min - Uploaded by LakeNipissingIn the 70s and 80s, if you wanted a deck of playing cards, it was .... I was quite young in the ... [SEP] [SEP] The Glamorous Airline Lounges In The Sky From The 1970s | VinePair [SEP] Dec 7, 2015 ... Today domestic first class means free drinks, free (perhaps edible) food, and a ... \nPan Am was the first airline to fly a 747 commercially, on January 22, 1970, ... \ninstalling Polynesian Pubs in the economy class section of their DC-10s and \n747s. ... At the front of the coach cabin you could find a small lounge... [SEP] [SEP] Traveling in a Boeing 747 in the 1970s was pretty damn awesome [SEP] Jan 19, 2014 ... The idea of the Boeing 747 started in the 1960s, when Pan Am ... To demonstrate \ntheir idea, they built this prototype of their idea for a 400-seater's economy class: \n... In part, you can say that it was us and our stupid carry-on bags that screwed \nairplanes. .... Say Lufthansa's 747-400 retrofitted First cabin. [SEP] [SEP] Chris McGinnis, Author at TravelSkills - Page 5 of 13 [SEP] Feb 1, 2015 ... Passengers on many Alaska Airlines flights will continue to get free premium ...... [\nVirgin Atlantic offers a premium economy section, a much more robust ...... \nimprovement over its old digs at the long gone PanAm Worldport. ...... Both \nstaircases were roped off during the flight to prevent mixing of the classes. [SEP] [SEP] Airliner and Jetliner Manufacturing Archives - Page 8 of 16 - Airchive [SEP] Oct 2, 2014 ... The airline began non-stop, thrice weekly service from Baku to New ... Certificate \nof Pan Am's first 707 flight at Pan Am First Flight Out Store in ... I remember \ndomestic flights in Colombia and form Miami to Bogota in the late 1970s and ...... \nIn-Flight Review: LAN Airlines Boeing 787-8 Part 1 Economy Class. [SEP] [SEP] Airlines Archives - Page 7 of 12 - TravelSkills [SEP] Mar 8, 2015 ... American Airlines last week kicked off new daily service between Miami ..... (The \nflights are part of Virgin's partnership with Delta, so you can ...... With an annual \nlounge membership, the value of these free drinks can .... peace of the first class \ncabin, which is separated from economy by a galley or lavatory. [SEP] [SEP] Airports Archives - Page 5 of 7 - TravelSkills [SEP] Nov 24, 2014 ... Only two major U.S. airlines let all passengers check a bag for free JetBlue ...... \nBoth staircases were roped off during the flight to prevent mixing of the classes. \n...... part of the reason why Delta's flight is operating on what's basically a \nseasonal basis. ...... >Nostalgia buffs can have dinner in a Pan Am 747. [SEP] [SEP] for senior australians with a zest for life - Cld.bz [SEP] Aug 6, 2014 ... at arV, I am with and around people who ..... passers-by as part of the inaugural \nNational ... Lodge took pride in showing off their home and the .... purchase from \nlate 2015 with completion .... and economic challenges of the .... aFter tHe Card \ngame, Betty began ..... 
care, is basically why I got into aged care. [SEP] [SEP] Oi Vietnam Issue #4 (June 2013) by Oi Vietnam - issuu [SEP] Jun 1, 2013 ... by Panpimon Suwannapongse, the Thai Consul General in Ho Chi Minh ... steak \nand free flow of beverages and Caesar drinks "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Barbra Streisand knows he played Lt. Col. Bill \"Raider\" Kelly on \"Pensacola: Wings of Gold\"",
"id": "1426088cb6494263906e1111064d5d72",
"answers": [
{
"text": "James Brolin",
"answer_start": 415
}
]
}
],
"context": "[SEP] [SEP] Pensacola: Wings of Gold - Show News, Reviews, Recaps and ... [SEP] Lt. Colonel Bill Kelly takes over the Sea Dragons, a 4 person unit consisting of a \nfighter pilot, ... And he tries to reconnect with his estranged teen age daughter. [SEP] [SEP] Jeopardy! #1 Flashcards | Quizlet [SEP] TV ACTORS & ROLES 200 Barbra Streisand knows he played Lt. Col. Bill \"\nRaider\" Kelly on \"Pensacola: Wings of Gold\". James Brolin. [SEP] [SEP] James Brolin Naked [SEP] James Brolin. James Brolin and Barbara Streisand. The Amityville Horror. James \nBrolin and Barbara Streisand. James Brolin and Barbara Streisand... [SEP] [SEP] Pensacola: Wing of Gold | Desenhos e Seriados | TV Sinopse [SEP] Pensacola: Wings of Gold foi uma srie de televiso do gnero aventura e drama\n, criado ... O projeto de uma nova srie nasceu na cabea de Barbra Streisand e \nseu marido James ... James Brolin como Tenente Coronel Bill Raider Kelly. [SEP] [SEP] Star Trek Actors' Other Roles FAQ [01/15] [INTRO] - Google Groups [SEP] Sep 9, 2003 ... William Shatner James B. Sikking Liam Sullivan ** .... He Who Gets Slapped (\n1946) ..... Star Trek II: The Wrath of Khan (1982) |Lt. Saavik| ... Married with \nChildren {Kelly Takes a Shot} |Freyer Tuck| ...... Star Trek VI: The Undiscovered \nCountry (1991) |Col. ...... A Tribute to Barbra Streisand (2001) |Himself| [SEP] [SEP] Nordisk Mobiltelefon - HeiNER - The Heidelberg Named Entity ... [SEP] Estadio Nou Camp William Giauque Vic Amuso Scream (album) Tasman Clingan \n..... USS Panay (PR-5) Grace Santiago Kelly Scott McAllen Independent School \nDistrict ...... Turalie Juozapin Hill Marked Woman Good to Finally Know \nGangaikondan ...... Hall of Memory (Birmingham) Oliver Percy Bernard Guy Bates \nLt. Col. [SEP] [SEP] Rental car vegas - [SEP] His starring role is in the spotlight again after he was nominated for a Golden ...... \nI didn't know where to start, where to finish, so it was a mess at the beginning. "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "No. 2: 1912 Olympian; football star at Carlisle Indian School; 6 MLB seasons with the Reds, Giants & Braves",
"id": "3e24b66b256a4e28ae00280d8960803a",
"answers": [
{
"text": "Jim Thorpe",
"answer_start": 209
}
]
}
],
"context": "[SEP] [SEP] Assignment 13 [SEP] Nov 30, 2014 ... 2: 1912 Olympian; football star at Carlisle Indian School; 6 MLB seasons with the \nReds, Giants & Braves ## 3 The city of Yuma in this state has... [SEP] [SEP] Jim Thorpe - Wikipedia [SEP] James Francis \"Jim\" Thorpe was an American athlete and Olympic gold medalist. \nA member of ... Thorpe attended the Sac and Fox Indian Agency school in Stroud, \nOklahoma, with his ... Carlisle's 1912 record included a 276 victory over Army. \n.... to the Giants in 1917 but was sold to the Cincinnati Reds early in the season. [SEP] [SEP] amazon web services - put_records() only accepts keyword ... [SEP] from __future__ import print_function # Python 2/3 compatibility import boto3 \nimport json import decimal #kinesis = boto3.resource('kinesis', ... 2: 1912 \nOlympian; football star at Carlisle Indian School; 6 MLB seasons with the Reds, \nGiants & Braves'\", \"round\": \"DDSSS! .... Not the answer you're looking for? [SEP] [SEP] Jeopary Questions page 2247 - EVERYBODY TALKS ABOUT IT ... [SEP] ESPN's TOP 10 ALL-TIME ATHLETES: No. 1: Lettered in hoops, football & \nlacrosse at Syracuse & if you think he couldn't act, ask his 11 \"unclean\" buddies \nHISTORY: In 1000 Rajaraja I ... 10 ALL-TIME ATHLETES: No. 2: 1912 Olympian; \nfootball star at Carlisle Indian School; 6 MLB seasons with the Reds, Giants & \nBraves. [SEP] [SEP] Ten Greatest American Summer Olympians: No. 7 Jim Thorpe | The ... [SEP] Aug 2, 2016 ... Jim Thorpe dominated the 1912 Summer Olympics and was one of ... schools, \nbut wound up attending Carlisle Indian Industrial School, ... Additionally, he \nplayed six seasons in Major League Baseball for ... York Giants, Cincinnati Reds \nand Boston Braves before leaving the .... 2 hours ago Ryan Phillips. [SEP] [SEP] Jim Thorpe Bio, Stats, and Results | Olympics at Sports-Reference.com [SEP] In June 1904, Jim Thorpe entered the second Indian school, Carlisle Indian \nSchool ... There he became a legend as both a football player and track & field \nathlete. ... in the United States were the 1912 Olympic Trials, but Thorpe did not \ncompete. ... playing 6 seasons with the Giants, some time with the Cincinnati \nReds, and... [SEP] [SEP] Jim Thorpe facts, information, pictures | Encyclopedia.com articles ... [SEP] Make research projects and school reports about Jim Thorpe easy with credible \n... It was famous for its football team, the Carlisle Indians, which regularly beat the \nbest ... The Indians finished the season 10-2-1, outscoring their opponents 212 to \n55. .... The year 1912 brought Thorpe not only two Olympic gold medals, but it... [SEP] [SEP] Jim Thorpe - New World Encyclopedia [SEP] Aug 22, 2016 ... Thorpe participated in the 1912 Summer Olympics. ... Height, 6 ft 1 in (1.85 m) ... \nNew York Giants ... professional football, played Major League Baseball and also \nhad a brief ... 1 Early life; 2 A rising star; 3 An Olympic hero; 4 Declared a ... Jim \nThorpe in Carlisle Indian Industrial School uniform, about 1909. [SEP] [SEP] ESPN.com: Page 2 : The best all-around athletes [SEP] MLB Front Page ... The Iron Horse was 6 feet and 200 pounds of pure muscle -- \nin columnist Jim .... No other athlete has ever been drafted in all three pro sports. \n2. ... best football player of his era, leading the Carlisle Indian School team to the \n1912 ... putting in six major-league seasons with the Giants, Reds and Braves \nand... [SEP] [SEP] Jim Thorpe - Famous Basketball Players, Hockey Player, Track and ... [SEP] Feb 5, 2016 ... 
Famous Basketball Players, Hockey Player, Track and Field Athlete, Baseball ... \nthe pentathlon and decathlon at the 1912 Olympics but was stripped of his gold ... \nAn All-American in football at the Carlisle Indian School, he won the ... career \nwith the Giants, Cincinnati Reds and Boston Braves, although he... "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "\"And away we go\"",
"id": "fda746982dd848b89c40b3fc8a8e56d2",
"answers": [
{
"text": "Jackie Gleason",
"answer_start": 34
}
]
}
],
"context": "[SEP] [SEP] \"And Away We Go\" With Jackie Gleason #1 - YouTube [SEP] Aug 11, 2014 - 10 min - Uploaded by Comic Book Long BoxThe Honeymooners first appeared on Cavalcade of Stars on October 5, 1951, with Carney in a ... [SEP] [SEP] And Away We Go by Scott & Brendo on Spotify [SEP] Listen to And Away We Go now. Listen to And Away We Go in full in the Spotify \napp. Play on Spotify. 2013 Scott & Brendo; 2013 Scott & Brendo. Legal [SEP] [SEP] And Away We Go!: Migy: 9780805099010: Amazon.com: Books [SEP] Editorial Reviews. From School Library Journal. PRES-GR 2STAR-GAZING \nMR. ... This item:And Away We Go! by Migy Hardcover $14.60. Only 12 left in \nstock... [SEP] [SEP] And Away We Go - Dramatists Play Service, Inc. [SEP] THE STORY: Times change, but life in the theatre remains the same: chaotic, \nsometimes brutal, but often euphoric, too. AND AWAY WE GO jumps through time\n... [SEP] [SEP] And Away We Go! by Migy Reviews, Discussion, Bookclubs, Lists [SEP] And Away We Go! has 219 ratings and 46 reviews. Mr. Fox is going to the moon! \nAway he goes in his hot air balloon. . . . But wait! Can Elephant come too?... [SEP] [SEP] And Away We Go! | Mr. Migy | Macmillan [SEP] Mr. Fox is going to the moon! Away he goes in his hot air balloon. . . . But wait! \nCan Elephant come too? Sure! Let's bring along some pizza. What about Giraffe? [SEP] [SEP] ... and away we go! [SEP] Feb 25, 2015 ... This month we celebrated Andrew's 5th birthday with a Lego party for a bunch of \nhis best buddies here in Baltimore. His actual birthday is at the... [SEP] [SEP] Jackie Gleason - Wikipedia [SEP] John Herbert \"Jackie\" Gleason (February 26, 1916 June 24, 1987) was an \nAmerican ..... In 2000 a statue of him as Ralph Kramden in \"And away we go! [SEP] [SEP] \"Dallas\" And Away We Go! (TV Episode 1989) - IMDb [SEP] Drama... [SEP] [SEP] \"7th Heaven\" And Away We Go (TV Episode 2007) - IMDb [SEP] Drama .... Discuss And Away We Go (2007) on the IMDb message boards . \nGetting Started | Contributor Zone ... [SEP] [SEP] All Around and Away We Go on Vimeo [SEP] Nov 16, 2010 - 4 minDirected by Mike Luciano Director of Photography - Ian Perlman Animation - Andrea Estella Edit ... [SEP] [SEP] Terrence McNally's 'And Away We Go,' at the Pearl - The New York ... [SEP] Nov 27, 2013 ... Terrence McNally plants a big kiss on his lifelong love the theater in his \nlatest play, And Away We Go, a time-traveling romp set... [SEP] [SEP] COHEED AND CAMBRIA LYRICS - Away We Go - A-Z Lyrics [SEP] And away we go. My little Jersey girl. If we could escape this innocence. Trapped \nbetween the fears and the words we kept. We'll live out our later years. Here in... [SEP] [SEP] And Away We Go ... - Talking Points Memo [SEP] May 3, 2016 ... Fundamentally, our next president will need to do two things: keep our nation \nsafe in a dangerous world and help working families get ahead... [SEP] [SEP] And Away We Go! by Miguel Ornia-Blanco; Migy - FictionDB [SEP] In the tradition of classic cumulative tales like \"Mr. Gumpy's Outing,\" Migy's \"And \nAway We Go!\" is an unforgettable debut picture book from an exciting new talent. [SEP] [SEP] And Away We Go - TheaterMania.com [SEP] Nov 24, 2013 ... Theater nerds, rejoice! All others? Well, you may find yourself a bit confused \nwhen you see Terrence McNally's new play, And Away We Go,... [SEP] [SEP] And Away We Go Travel | Facebook [SEP] 2851 NE 5th Avenue. Boca Raton, FL 33431. [SEP] [SEP] Twin Sister - All Around And Away We Go Lyrics | MetroLyrics [SEP] "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Cows regurgitate this from the first stomach to the mouth & chew it again",
"id": "aa0179e972f94530a8125180d25aa3b4",
"answers": [
{
"text": "cud",
"answer_start": 1640
}
]
}
],
"context": "[SEP] [SEP] What Would Cause a Cow to Regurgitate Its Cud? | Animals - mom.me [SEP] After chewing and swallowing, the cow's food is passed into the first two ... back \ninto the cow's mouth in the form of \"cud\" for her to chew on again and again. [SEP] [SEP] Animal Health Literacy > How Cows Eat Grass - FDA [SEP] Jul 1, 2016 ... After we chew and swallow our food, the stomach serves as a ... In the front of the \nmouth, teeth (known as incisors) are only located on the bottom jaw. ... first two \nsections of a cow's stomach, the reticulum and the rumen. [SEP] [SEP] Rumination - Food, Stomach, Rumen, and Plant - JRank Articles [SEP] The stomach of these grazing herbivores consists of four chambersthe rumen, \nthe ... rapidly without chewing, and later regurgitates it (brings it back up into the \nmouth), ... form during bacterial fermentation in the first two chambersthe \nreticulorumen. ... Certain cows are particularly susceptible to this, and farmers \noften lose... [SEP] [SEP] Cud - Wikipedia [SEP] Cud is a portion of food that returns from a ruminant's stomach to the mouth to be \nchewed for the second time. More accurately, it is a bolus of semi-degraded food \nregurgitated from the ... The alimentary canal of ruminants, such as cattle, goats, \nsheep, alpacas, and antelope, are unable to produce the enzymes required to... [SEP] [SEP] rumin-, rumina- - Word Information [SEP] Partly digested food that ruminants (cattle, goats, etc.) return to the mouth, after it \nhas passed into the first stomach, to chew again as an aid to more ... in the \nsecond chamber, or the \"reticulum\", and then regurgitated to be chewed as the \ncud. [SEP] [SEP] The Dairy Mom: Content Cows Chew Cud [SEP] Aug 9, 2011 ... Cows are ruminants, which means their stomach contains four compartments: the \nrumen, reticulum, omasum and ... First they eat the feed, than they regurgitate the \npartially digested food and chew it again in the form of cud. The four ... Ruminants \nlater regurgitate it into the mouth where they chew their cud. [SEP] [SEP] Chew It Twice - 4-H Youth Development [SEP] into smaller pieces. Then the cow regurgitates those pieces so it can chew them \nagain. The partly-digested food that comes back into the ruminant's mouth is. [SEP] [SEP] Milk - KarenHurd.com [SEP] A cow starts by chewing the grass. ... The cow actually vomits the chewed grass \nback up into its mouth and chews it again because it cannot yet be digested until \nit is chewed many, many times. ... The reticulum is the cow's \"first stomach. ... feed \na human baby cow's milk, the baby has a strong tendency to regurgitate or what... [SEP] [SEP] What is cud, and why do cattle chew it? | Cattle Empire [SEP] Dec 20, 2013 ... The reason is because cows must chew their food twice in order to digest ... \nCattle are ruminant animals, this mean their stomach contains four compartments\n: ... When a cow first takes a bite, it chews just enough to moisten the food. ... back \nup to the cow's mouth, where it is re-chewed and swallowed again,... [SEP] [SEP] How Animals Mechanically Break Down Food - For Dummies [SEP] Humans have one stomach that fills with hydrochloric acid and enzymes to help \n... So, when a cow swallows some grass, the chewed grass first enters the ... \nCows then regurgitate (spit up) the material from the rumen, called cud, back into \n... The cud is swallowed again, and it re-enters the rumen. ... Do chickens have \nlips? 
[SEP] [SEP] Chewing the cud - definition of chewing the cud by The Free Dictionary [SEP] Food regurgitated from the first stomach to the mouth of a ruminant and chewed \n... to the mouth again for further chewing by ruminants, such as cattle and sheep. [SEP] [SEP] Cud-chewing - definition of Cud-chewing by The Free Dictionary [SEP] Food regurgitated from the first stomach to "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "In geologic time one of these, shorter than an eon, is divided into periods & subdivided into epochs",
"id": "b5da96ef279448aca61b049042467a4a",
"answers": [
{
"text": "Era",
"answer_start": 2090
}
]
}
],
"context": "[SEP] [SEP] About the geologic time scale [SEP] The geologic history of the Earth is broken up into hierarchical chunks of time. \nFrom largest to smallest, this hierarchy includes eons, eras, periods, epochs, and \nages. All of these are displayed in the portion of the geologic time scale shown \nbelow. ... The Phanerozoic is subdivided into three major divisions: the Cenozoic,\n... [SEP] [SEP] Geologic time scale - Wikipedia [SEP] The geological time scale (GTS) is a system of chronological dating that relates \ngeological ... Eons are divided into eras, which are in turn divided into periods, \nepochs and ... of rock that correspond to these periods of geologic time in Earth's \nhistory. ... series that is then subdivided into zones based on succession of \ntrilobites. [SEP] [SEP] What is a eon, a era, a period and a epoch? | Yahoo Answers [SEP] Feb 6, 2010 ... Epoch: A subdivision of geologic time that is longer than an age but shorter than \na period. The Tertiary Period is divided into five epochs:... [SEP] [SEP] Geologic Time Scale: Major Eons, Eras, Periods and Epochs - Video ... [SEP] The geologic time scale is an essential tool for understanding the history of Earth \nand the evolution of life. ... They break up geologic time into larger and smaller \nchunks, so that major events are ... The Cenozoic era is the one we are in today. \n... Keep in mind that these three eras are all grouped within the Phanerozoic eon. [SEP] [SEP] Geologic Time Scale - Geological Time Line - Geology.com [SEP] Geologists have divided Earth's history into a series of time intervals. ... you can \nsee the Phanerozoic Eon is the most recent eon and began more than 500 \nmillion years ago. Eras. Eons are divided into smaller time intervals known as \neras. ... possible and the periods of the Cenozoic are frequently subdivided into \nepochs. [SEP] [SEP] The Geologic Time Scale Flashcards | Quizlet [SEP] One of the largest units of geologic time is the era. There are four ... Divided into 2 \nperiods: Tertiary and Quaternary, which are then subdivided into epochs. [SEP] [SEP] Time Words: Era, Epoch, and Eon - Daily Writing Tips [SEP] An epoch is longer than an era and can cover more than one lifetime. It is marked \n... into eras. A geological era is subdivided into periods, epochs, and stages. [SEP] [SEP] cosscience1 / Lesson 9-02 Geologic Time Scale [SEP] 1. c. Students know the evidence from geological studies of Earth and other \nplanets suggests that ... Taken together, these time spans make up the geologic \ntime scale. ... Finally, periods are divided into still smaller units called epochs. \nThe eon that began about 540 million years ago is the Phanerozoic, a term \nderived from... [SEP] [SEP] Appendix - ClassZone [SEP] The geologic time scale is divided into eons, eras, periods, epochs. (ehp-uhks) ... \nfossil record of this eon, it is further divided into smaller units of time called eras... [SEP] [SEP] Geologic time scale - New World Encyclopedia [SEP] Apr 21, 2015 ... The geologic time scale is used by geologists and other scientists to map ... is the \neon, which is further divided successively into eras, periods, epochs, and stages. \n... are far more recognized faunal stages than defined geologic time units. ... For \nover one hundred years, the age of the Earth and of the rock... [SEP] [SEP] Phanerozoic - New World Encyclopedia [SEP] Apr 11, 2016 ... The Phanerozoic eon is the interval of geologic time spaning from the ... 
The \nPhanerozoic eon is divided into three eras: The Paleozoic, Mesozoic, and \nCenozoic. ... The naming of periods and epochs in the Cenozoic era is most \nformally ... One of these alternate periods, the Quaternary, comprises the... [SEP] [SEP] Geologic time scale - Topics [SEP] Eons are divided into eras, which are in turn divided into periods, epochs and \nages. ... layers of rock that correspond to these periods of geologic time in Earth's \nhistory. ... series that is then subdivided into zones based on succession of \n"
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "For the last 8 years of his life, Galileo was under house arrest for espousing this man's theory",
"id": "cc15c92b096a4be898611ed8d3869b4e",
"answers": [
{
"text": "Copernicus",
"answer_start": 1233
}
]
}
],
"context": "[SEP] [SEP] Jeopardy Database Mind your P's and Q's [SEP] Question : chr \"For the last 8 years of his life, Galileo was under house arrest for \nespousing this man's theory\" \"No. 2: 1912 Olympian; football star at Carlisle... [SEP] [SEP] Assignment 13 [SEP] Nov 30, 2014 ... 3-LETTER WORDS $200 ## Question ## 1 For the last 8 years of his life, Galileo \nwas under house arrest for espousing this man's theory ## 2... [SEP] [SEP] RPubs - Jeopardy! [SEP] Jan 2, 2015 ... Question : chr \"For the last 8 years of his life, Galileo was under house arrest for \nespousing this man's theory\" \"No. 2: 1912 Olympian; football... [SEP] [SEP] Galileo affair - Wikipedia [SEP] The Galileo affair was a sequence of events, beginning around 1610, culminating \nwith the trial ... In 1632 Galileo, now an old man, published his Dialogue \nConcerning the Two Chief ... Galileo was kept under house arrest until his death \nin 1642. .... Heliocentrism, the theory that the Earth was a planet, which along \nwith all the... [SEP] [SEP] I BROKED IT by outofculture Pull Request #1 petekinnecom ... [SEP] + \"question\": \"'For the last 8 years of his life, Galileo was under house arrest for \nespousing this man's theory'\",. + \"value\": \"$200\",. + \"answer\": \"Copernicus\". + },. [SEP] [SEP] Did Galileo get in trouble for being right, or for being a jerk ... - io9 [SEP] Sep 15, 2011 ... Galileo was facing some stiff odds when he published his Dialogue Concerning \nthe ... in Galileo spending the last years of his life under house arrest - is ... and \nfair-minded, formally granted Galileo to write about the theory. ... while the one \nthat espoused the Aristotelian geocentric view of the ..... 8; 23; 17.6K. [SEP] [SEP] Web Navigation [SEP] For the last 8 years of his life, Galileo was under house arrest for espousing this \nman's theory. In the winter of 1971-72, a record 1,122 inches of snow fell at... [SEP] [SEP] Costco by river cree hours [SEP] Jun 17, 2013 HISTORY For the last 8 years of his life, Galileo was under house \narrest for espousing this man's theory Copernicus ESPN's TOP 10 ALL-TIME... [SEP] [SEP] S&w fowl farm&w fowl farm [SEP] Jun 17, 2013 HISTORY For the last 8 years of his life, Galileo was under house \narrest for espousing this man's theory Copernicus ESPN's TOP 10 ALL-TIME... [SEP] [SEP] Jeremy chandler fowls roundheads, sweaters, hatch [SEP] Jun 17, 2013 HISTORY For the last 8 years of his life, Galileo was under house \narrest for espousing this man's theory Copernicus ESPN's TOP 10 ALL-TIME... [SEP] [SEP] Lebowski sweater replica [SEP] Jun 17, 2013 HISTORY For the last 8 years of his life, Galileo was under house \narrest for espousing this man's theory Copernicus ESPN's TOP 10 ALL-TIME... [SEP] [SEP] The Galileo Controversy | Strange Notions [SEP] May 17, 2013 ... Ten years prior to Galileo, Johannes Kepler published a heliocentric work that \nexpanded on Copernicus' work. ... Copernicus refrained from publishing his \nheliocentric theory for .... It is a straw man argument to represent the Catholic \nChurch as ...... He was put under house arrest for the remainder of his life. [SEP] [SEP] Used ventriloquist dolls alberta [SEP] Jun 17, 2013 HISTORY For the last 8 years of his life, Galileo was under house \narrest for espousing this man's theory Copernicus ESPN's TOP 10 ALL-TIME... [SEP] [SEP] Recipe for tickles peppermint moonshine [SEP] Results 1 - 10 of 3440 ... 
Jun 17, 2013 HISTORY For the last 8 years of his life, Galileo was under house \narrest for espousing this man's theory Copernicus ESPN's... [SEP] [SEP] Images knickerless women [SEP] Jun 17, 2013 HISTORY For the last 8 years of his life, Galileo was under house \narrest for espousing this man's theory Copernicus ESPN's TOP "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "This company's Accutron watch, introduced in 1960, had a guarantee of accuracy to within one minute a month",
"id": "43ee9fcf44a148348923d4d3f22d7d98",
"answers": [
{
"text": "Bulova",
"answer_start": 795
}
]
}
],
"context": "[SEP] [SEP] History of Bulova Corporation FundingUniverse [SEP] 1923: The company changes its name to the Bulova Watch Company. ... a \npopular line originally introduced in the early 1960s, and Sportstime watches. ... \nLoews Corporation, which has holdings in the hotel, tobacco, and insurance ... \nwas the first to be sold with a written guarantee of accuracy to within one minute a \nmonth. [SEP] [SEP] Bulova Accutron History - The Accutron Place [SEP] In 1950 Swiss engineer Max Hetzel joined the Bulova Watch Company in Biel, ... \nIn March 1952 watchmakers Elgin and Lip, introduced electric watches. ... watch \nthat will be guaranteed accurate to within 2 seconds a day, or 1 minute a month. \n... These transistors and a tuning fork frequency filter which he had previously... [SEP] [SEP] Bulova - Wikipedia [SEP] Bulova is a manufacturer of watches and clocks. Its headquarters is located in \nNew York City. ... It was reincorporated under the name Bulova Watch Company \nin 1923, and ... the Accutron was guaranteed to be accurate to a minute per \nmonth, or 2 seconds per day, considerably better than mechanical watches of the \ntime. [SEP] [SEP] Web Navigation [SEP] This company has revolutionized mobile phone industry. This is the farthest \nplanet from the ... Ranger Station in this state. This company's Accutron watch, \nintroduced in 1960, had a guarantee of accuracy to within one minute a month. ... \nOur retrieval method, instead, has only one: Navigation. We built a neural net \nbased... [SEP] [SEP] A Guide to Buying and Collecting Accutron Watches | eBay [SEP] This is a guide for buying and collecting Bulova Accutron watches - the world's ... \nCreated in the 1960's and 1970's, they are now highly prized world-wide by \nwatch ... was guaranteed accurate to within one second a day or one minute a \nmonth, ... Accutron introduced several models of desk clocks using the 214 \nmovement. [SEP] [SEP] Bulova Watch History | WorldofWatches [SEP] Founded by Joseph Bulova in 1875, the company started its operations in lower \n... that was guaranteed to be accurate within 2 seconds a day or one minute a \nmonth. ... Later in the 1960's, Bulova introduced the Spaceview watch featuring a \n... Bulova brand featuring precise Japanese movements, Bulova also has the... [SEP] [SEP] Accutron History - Old Father Time [SEP] (Reprinted from the Bulova Watch Company's 1960 Publication of this title) ... \nInstead, the ACCUTRON mechanism has a tuning fork as the timekeeping \nelement. ... Bulova Watch Company to give a written guarantee of specific \naccuracy with each ... one minute a month in normal use as a wrist timepiece, \nand is guaranteed... [SEP] [SEP] Bulova: state of the art | Cell Phone Watch & Mobile Watch Phones [SEP] Feb 2, 2016 ... Bulova's Accutron watch has retained its popularity since its introduction in Nov \n1960. ... product collection that's as popular today as when it was first introduced. \nAccutron, unveiled in November, 1960, truly represented one of the ... an \nunprecedented guarantee of accuracy to within one minute a month. [SEP] [SEP] Bulova Accutron: the tuning fork revolution | Watchonista [SEP] May 29, 2015 ... In 1950, long before the arrival of quartz watches, whose precision ... Bulova \ncreated the first tuning fork wristwatch in the 1960s, yet the ... Its mechanical \ncaliber provided uneven precision: two seconds per day or one minute per month\n. ... The caliber 218, introduced in 1965, was thinner than the 214 (4.4... 
[SEP] [SEP] When I started, I made up my mind that quality would be ... - Swisstime [SEP] to the introduction of the world's first fully electronic watch, Accutron, in 1960, to \nthe company's newest achievement, Precisionist, which is poised to ... of \ntimekeeping technology, while also improving the accuracy of ..... With \nguaranteed accuracy to one ... within one minute a month, the first such promise \never offered by a. [SEP] [SEP] A Guide to Buying and Collecting Accutron Watches | eBay [SEP] This is a guide for buying and collecting Bulova Accutron watches - the world's ... \nCreated in the 1960"
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "This \"Modern Girl\" first hit the Billboard Top 10 with \"Morning Train (Nine To Five)\"",
"id": "8824fe46b8b442699c7b8ef0b25ef995",
"answers": [
{
"text": "Sheena \nEaston",
"answer_start": 1533
}
]
}
],
"context": "[SEP] [SEP] 9 to 5 (Sheena Easton song) - Wikipedia [SEP] \"Modern Girl\" (1980), \"9 to 5 (Morning Train)\" (1980), \"One Man Woman\" (1980). \"\n9 to 5\" or \"Morning Train\" is the title of a popular song written by British songwriter \nFlorrie Palmer and recorded by Sheena Easton in 1980, becoming her biggest hit\n. ... The title of the song was changed to \"Morning Train (Nine to Five)\" to avoid... [SEP] [SEP] Sheena Easton - Wikipedia [SEP] Easton became the first and only artist in history to have a Top 5 hit on five \ndifferent Billboard charts consecutively, with Morning Train (9 to 5) (Pop & Adult... [SEP] [SEP] Morning Train (Nine To Five) by Sheena Easton Songfacts [SEP] Morning Train (Nine To Five) by Sheena Easton song meaning, lyric ... To \ncapitalize on the sudden interest in Easton, EMI re-released \"Modern Girl,\" ... At \nthe time the song was at #14 on Billboard's Hot Top 100 chart; and on ... It was \nthe fifth of her eight Top 10 records; her first Top 10 hit was \"Morning Train {Nine \nto Five)\". [SEP] [SEP] 9 to 5 (Sheena Easton song) - Revolvy [SEP] \"9 to 5\" became a top three hit and was one of the best-selling singles of the year. \n... The title of the song was changed to \"Morning Train (Nine to Five)\" to avoid ... \"\nModern Girl\" is the debut single by Scottish pop singer Sheena Easton . ..... chart, \nand reaching number three on the Billboard Hot 100 and number 10 on the UK... [SEP] [SEP] Sheena Easton - Chart history | Billboard [SEP] The Hot 100. Adult Contemporary ... Morning Train (Nine To Five). Sheena \nEaston ... Modern Girl ... The Greatest Hits 2: 1991, Vol. 2 ... March 10, 1984. 25\nPeak... [SEP] [SEP] Chart Beat Chat | Billboard [SEP] Mar 21, 2008 ... The Beatles Earn 32nd Top 10 Album on Billboard 200 Chart With &#039; ... \ncame first, \"Morning Train (Nine to Five)\" or \"Modern Girl,\" depends on where ... In \nthe U.K., \"Modern Girl\" was the Glaswegian singer's first single. [SEP] [SEP] US Top 40 Singles For The Week Ending May 9, 1981 | Weekly Top 40 [SEP] May 9, 1981 ... TW LW TITLE Artist (Label)-Weeks on Chart (Peak Position) 1 1 MORNING \nTRAIN (Nine To Five) ---- Sheena Easton (EMI-America)-13 (2... [SEP] [SEP] Top 100 Songs of 1981 - Billboard Year End Charts [SEP] View a list of the top 100 hit songs in the US in 1981 and listen... [SEP] [SEP] Radionomy | Listen to Sheena Easton radio stations for free [SEP] \"Modern Girl\" re-entered the chart subsequently and climbed into the top 10, ... \nalthough it was renamed \"Morning Train (Nine To Five)\" for its release in the US \n... Easton's first and only #1 hit in the US and topped both the Billboard Hot 100... [SEP] [SEP] Top 100 Hits of 1981/Top 100 Songs of 1981 - Music Outfitters, Inc. [SEP] Top 100 songs for the year 1981 from the Billboard Year-End Hot 100 charts. ... \n10. Keep On Loving You, REO Speedwagon 11. Theme from \"Greatest American \n... Morning Train (Nine to Five), Sheena Easton ... Modern Girl, Sheena Easton [SEP] [SEP] Factacular : Brainoff Results [SEP] \"Modern Girl\" re-entered the chart subsequently and climbed into the top 10, and \n... although it was renamed \"Morning Train (Nine To Five)\" for its release in the \nU.S. ... became Easton's first #1 hit in the U.S. and topped both the Billboard Hot... [SEP] [SEP] Sheena Easton | Miami Vice Wiki | Fandom powered by Wikia [SEP] ... #1 on the Billboard Hot 100 (with the title changed to \"Morning Train (Nine to \nFive)\" ... \"Modern Girl\" then charted in "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Prime Minister Tony Blair dubbed her \"The People's Princess\"",
"id": "e04ef70e4d034c2e9d56dc96d662d9cb",
"answers": [
{
"text": "Princess Diana",
"answer_start": 776
}
]
}
],
"context": "[SEP] [SEP] Tony Blair's 'people's princess' speech honoured - Telegraph [SEP] Nov 16, 2013 ... A plaque has been placed on the spot where Tony Blair addressed the nation ... \nMr Blair, still fresh in his job as Prime Minister, addressed the nation from ... She \nwas the people's princess and that's how she will stay, how she ... After her death, \nMrs Burton was described by Mr Blair as one of the kindest... [SEP] [SEP] Tony Blair: Diana was a manipulator like me - Telegraph [SEP] Aug 31, 2010 ... When Diana, the 'People's Princess' died, Tony Blair felt a duty to 'protect the ... \nthat her chances of survival were poor, and was told of her death at 4am. .... Are \nwe really supposed to believe that the Prime Minister can't find a... [SEP] [SEP] Tony Blair's shock at Princess Diana's death revealed in ... - Mirror [SEP] Jan 7, 2016 ... The former Prime Minister said the \"star\" Princess \"was liked by ordinary ... Mr \nBlair, who would later dub her the People's Princess, said \"the... [SEP] [SEP] Tony Blair's - BBC [SEP] The Prime Minister, Tony Blair, paid a moving tribute to the life and work of ... \"Our \nthoughts and prayers are with Princess Diana's family, particularly her two sons. \n... \"She was the People's Princess and that is how she will stay, how she will... [SEP] [SEP] On day of her 50th, fans gather to remember Diana [SEP] Jul 1, 2011 ... On day of her 50th, fans gather to remember Diana ... and the woman then-Prime \nMinister Tony Blair dubbed \"the People's Princess\" still retains... [SEP] [SEP] Diana's Death: Read Bill Clinton and Tony Blair's Initial Reactions ... [SEP] Jan 8, 2016 ... ... then-US President Bill Clinton and Prime Minister Tony Blair after ... Bill Clinton \nand Tony Blair Talk About Diana's Death ... According to People, Blair, who \ndubbed Diana the People's Princess in the days following her... [SEP] [SEP] Tony Blair's book: Diana, Alastair Campbell, Afghanistan and ... [SEP] Aug 31, 2010 ... Preview of Tony Blair's memoir A Journey, including a surreal visit to Balmoral ... \na nation on to the streets and provided the Labour prime minister with one of his \n... constituency church and dubbed her \"the people's princess\". [SEP] [SEP] Queen's image on the rise again - latimes [SEP] Feb 4, 2007 ... Newly elected Prime Minister Tony Blair was the man with his finger on the \npublic's trembling pulse, dubbing Diana \"the people's princess\" and ... while the \nqueen remained locked in chilly isolation in her Scottish castle. [SEP] [SEP] LABORING FOR THE MONARCHY - The Washington Post [SEP] Sep 12, 1997 ... British Prime Minister Tony Blair is routinely described as a moderate, and he is \n... When Blair dubbed her the \"People's Princess,\" he implicitly... [SEP] [SEP] Princes William and Harry reveal plans for 20th anniversary of ... [SEP] Aug 18, 2016 ... ... dubbed the People's Princess by then-Prime Minister Tony Blair had ... Ten \nyears after her death, William and Harry commemorated their... [SEP] [SEP] The 18th anniversary of Princess Diana's death - Daily Express [SEP] Aug 31, 2015 ... ... in the world, she was just 36 years old at the time of her tragic death. ... prime \nminister Tony Blair dubbed Diana \"the People's Princess\". [SEP] [SEP] Princess Diana: The Rise of the Unforgettable Icon - AARP [SEP] She's dubbed Shy Di as she keeps her head down and dodges paparazzi while \n.... address, British Prime Minister Tony Blair dubs her the People's Princess. [SEP] [SEP] 4 Photos - | Deseret News [SEP] Jul 1, 2011 ... 
On day of her 50th birthday, fans gather to remember Princess Diana (4 ... then-\nPrime Minister Tony Blair dubbed \"the People's Princess\" still... [SEP] [SEP] The young princess was subject to worldwide media speculation ... [SEP] Aug 31, 2015 ... ... princess was subject to worldwide media speculation prior to her "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Once Tommy Mullaney on \"L.A. Law\", John Spencer now plays White House chief of staff Leo McGarry on this series",
"id": "b354d2f970eb457bb5a305d94ba1a777",
"answers": [
{
"text": "The West Wing",
"answer_start": 422
}
]
}
],
"context": "[SEP] [SEP] John Spencer (actor) - Wikipedia [SEP] John Spencer (December 20, 1946 December 16, 2005) was an American \nactor. He won an Emmy Award in 2002 for his role as White House Chief of Staff \nLeo McGarry on the NBC political drama series ... The same year, Spencer joined \nthe cast of the television series L.A. Law, playing rumpled, pugnacious, street-\nwise... [SEP] [SEP] John Spencer, 58, TV Actor Starring on 'The West Wing,' Dies - The ... [SEP] Dec 17, 2005 ... John Spencer, Emmy Award-winning actor who is best known for his role on ... \nSubscribe Now ... actor who played the shrewd, craggy White House chief of staff \non the ... In an eerie parallel to life, his character on \"The West Wing,\" Leo \nMcGarry, ... the fiery New York transplant Tommy Mullaney on \"L.A. Law. [SEP] [SEP] Actor John Spencer has died - TODAY.com [SEP] Dec 18, 2005 ... Spencer died after being admitted to a Los Angeles hospital during the night, ... \nSpencer played Leo McGarry, the savvy and powerful chief of staff to ... suffered a \nheart attack that forced him to give up his White House job. ... Spencer, who also \nstarred on L.A. Law as attorney Tommy Mullaney, received an... [SEP] [SEP] Spencer's loss leaves huge hole in 'Wing' - today > entertainment ... [SEP] Dec 27, 2005 ... Actor John Spencer (right) had a pivotal role on \"West Wing\" this season, ... \nFriday at 58, as the perfect man to play rumpled attorney Tommy Mullaney. ... \nMcGarry, President Josiah Bartlet's right-hand man, the chief of staff who ... In his \nmost touching episode from season one, a young White House staffer... [SEP] [SEP] John Spencer Of 'West Wing' Dies - CBS News [SEP] Dec 16, 2005 ... Spencer played Leo McGarry, the savvy and powerful chief of staff to President ... \nsuffered a heart attack that forced him to give up his White House job. ... who \ncreated the series, and Tommy Schlamme, one of the original ... Spencer, who \nalso starred on \"L.A. Law\" as attorney Tommy Mullaney, received an... [SEP] [SEP] Obituary: John Spencer | Media | The Guardian [SEP] Dec 19, 2005 ... home; media ... John Spencer, the actor who shot to fame playing McGarry, has \ndied of a ... series LA Law for its four final seasons, playing Tommy Mullaney, ... \nfoil for the younger actors who made up the presidential staff. ... Once married \nand divorced in the 1970s, Spencer leaves no immediate survivors. [SEP] [SEP] West Wing News Blog: Remembering John Spencer [SEP] Jan 20, 2006 ... Once again: Leo McGarry has been pronounced dead...\" .... Spencer took the \nstage with the real White House Chief of Staff .... Spencer soon began a stint that \nwould last until 1994 as feisty attorney Tommy Mullaney on NBC's \"L.A. Law. ..... \nActor John Spencer of The West Wing has died today of a heart... [SEP] [SEP] BBC NEWS | Entertainment | West Wing's Leo dies at age of 58 [SEP] Dec 17, 2005 ... John Spencer, the actor who played chief of staff Leo McGarry in NBC's The West \n... a heart attack that forces him to give up his White House job as chief of staff. ... \nSpencer, who also starred in LA Law as attorney Tommy Mullaney, ... If they do \nmake another series of the West Wing now, it won't be the same. [SEP] [SEP] 'West Wing' actor John Spencer dies - usa today [SEP] Dec 16, 2005 ... Home & Garden ... John Spencer portrayed Leo McGarry, the president's chief of \nstaff on The ... Jersey, also played streetwise attorney Tommy Mullaney on NBC's \nL.A. Law. ... 
Television, which produces the series, now in its seventh season. ... \nHe was \"one of those rare combinations of divinely gifted and... [SEP] [SEP] John Spencer Obituary | John Spencer Funeral | Legacy.com [SEP] Spencer played Leo McGarry, the chief of staff to President Jeb "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "(Hi, I'm Wallace Langham) I played Don Kirshner in VH1's TV movie about this quartet who sang \"Daydream Believer\"",
"id": "40b52c7c4f6b46ecb65a0541a69e92a6",
"answers": [
{
"text": "The Monkees",
"answer_start": 32
}
]
}
],
"context": "[SEP] [SEP] Daydream Believers: The Monkees' Story (TV Movie 2000) - IMDb [SEP] Biography ... Hey, Hey We're the Monkees (TV Movie 1997) .... Don Kirshner ... \nThese did not enter service until 1970, years after the Monkees' first Hawaii \nconcert. ... Davy Jones: For your information, I'm rather tall for horse racing. ... \nrecreation of the famous 'Daydream Believer' promo, and the opening montage of \nthe TV... [SEP] [SEP] Hey, Hey These Are The Monkees? - tribunedigital-orlandosentinel [SEP] Jun 28, 2000 ... TELEVISION REVIEW. There's Simply Not Enough Story To Sustain Vh1's Biopic \nAbout The '60s Rock ... Yet the movie bungles the dramatic payoff when the \nquartet is ... weak - hardly reason to believe in a group that sang ``I'm a Believer.'' \n... Record producer Don Kirshner (Wallace Langham of Veronica's... [SEP] [SEP] Daydream Believers: The Monkees Story - Monkeesrule43 Online [SEP] Originally Aired: VH1 Television Network, June 28, 2000 ... At these auditions, a \nmajor part of the story takes place when Don Kirshner is hired ... takes place in \nHonolulu, Hawaii, where the band plays \"I Wanna Be Free. ... Peter is helping \nDavy during the recording sessions for \"Daydream Believer. .... ---Wallace \nLangham. [SEP] [SEP] Daydream Believers (Davy Jones commentary) Script | Sunshine ... [SEP] You know this uh, this TV show that Micky was in, the Circus Boy thing. It, it really \n.... Don Kirshner (Wallace Langham), Micky Dolenz (Aaron Lohr). (00:16:38)... [SEP] [SEP] Monkees in Philadelphia! Monkees on NBC's Tonight Show! | The ... [SEP] Apr 3, 2011 ... KTLA-TV's morning news show in Los Angeles. .... hitmaker Don Kirshner (\nWallace Langham) teams with TV producer Van Foreman (Colin [SEP] [SEP] Site map | The Ultimate Rock and Pop Music History Website ... [SEP] (Baby) You Don't Have to Tell Me (1) Syndicate content (I Can't Get No) \nSatisfaction (1) Syndicate content (I Don't Know Why) But I Do (1) Syndicate \ncontent... [SEP] [SEP] MOVIES FOR TRADE ONLY - INCLUDES MANY RARITIES - iOffer [SEP] THE BEATNIKS - 1959 flick with Peter Breck and '50s singer Tony Travis ... \nFerguson with Mary McCormack, Frances Fisher, David Rasche, Chris Langham. \n..... Daydream Believer: Story of the Monkees -- VH1 TV movie, two hours, Ex ..... \nHigh Tide - 1988 - bittersweet comedy/drama - Judy Davis plays a backup \nsinger... [SEP] [SEP] Issues 1061 to 1160 - DFWRetroplex.com [SEP] What gigantic monster frightens TV personalities about what might happen to \nthem if ...... middle of a high school gym with dancing teens surrounding you, don'\nt mix. ...... that Dave Brubeck Quartet drummer Joe Morello enjoys playing \nbackstage. ...... Clarksville, I'm a Believer, Pleasant Valley Sunday and Daydream \nBeliever. [SEP] [SEP] Bernard Fisher Gunpowder Incident Moishezon manifold ... - Tool Labs [SEP] Mar 5, 2016 ... ... Jacques Bogopolsky Methimazole Beth Wallace Moist desquamation List of ... \nTaylorsville High School Ghajini (2005 film) Granger High School (Utah) ... Acua \nJohn F. Kennedy Memorial Airport Giorgia (singer) Thomas Point ...... Thompson \nSimon (TV show sketch) The Dark Side with Nat X I'm Chillin'... "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Signer of the Dec. of Indep., framer of the Constitution of Mass., second President of the United States",
"id": "10abdca0abb04882a56e94745a587f28",
"answers": [
{
"text": "John Adams",
"answer_start": 2927
}
]
}
],
"context": "[SEP] [SEP] Declaration of Independence, US Constitution, Constitution Day ... [SEP] Connecticut Delaware Georgia Maryland Massachusetts New .... Button \nGwinnett was the second signer of the Declaration to die as the result ... was the \nfirst Vice-President of the United States and the second President. .... He was one \nof the framers of the Constitution and was known as the Sage of the Convention. [SEP] [SEP] Founding Fathers of the United States - Wikipedia [SEP] The Founding Fathers of the United States are the individuals of the Thirteen \nBritish Colonies in ... The second Congress adopted the Declaration of \nIndependence. ... The result of the Convention was the United States Constitution\n. .... For their era, the 1787 delegates (like the 1776 signers) were average in \nterms of life... [SEP] [SEP] James Wilson - Wikipedia [SEP] James Wilson (September 14, 1742 August 21, 1798) was one of the Founding \nFathers of the United States and a signatory of the United States Declaration of \nIndependence. .... A fellow delegate in the Constitutional Convention of 1787 in \nPhiladelphia made ..... Signers of the United States Declaration of Independence. [SEP] [SEP] Gouverneur Morris - Wikipedia [SEP] Gouverneur Morris I (January 31, 1752 November 6, 1816) was an American \nstatesman, a Founding Father of the United States, and a native of New York City \nwho represented Pennsylvania in the Constitutional Convention of 1787. ... \nauthor of large sections of the Constitution of the United States and one of its \nsigners. [SEP] [SEP] John Dickinson - Wikipedia [SEP] John Dickinson a Founding Father of the United States, was a solicitor and \npolitician from Philadelphia, Pennsylvania and Wilmington, Delaware known as \nthe \"Penman of the Revolution\" for his twelve Letters from a Farmer in \nPennsylvania, published individually in 1767 and 1768. .... Following the \nDeclaration of Independence, Dickinson was given the rank of... [SEP] [SEP] WallBuilders - Issues and Articles - The Founding Fathers on Jesus ... [SEP] SIGNER OF THE DECLARATION OF INDEPENDENCE; FATHER OF THE ... He \nalso called on the State of Massachusetts to pray that . ... TO THE U. S. \nSUPREME COURT BAR; FRAMER OF THE BILL OF RIGHTS; DIRECTOR OF \nTHE U. S. MINT .... religion; and second, the support of the Constitution of the \nUnited States. [SEP] [SEP] Declaration of Independence facts, information, pictures ... [SEP] Get information, facts, and pictures about Declaration of Independence at ... This \ndocument, which the Second Continental Congress adopted on 4 July 1776, ... \nbecause the British government included two grave \"constitutional errors,\" ... \nthese United Colonies are, and of right ought to be, free and independent States, \nthat... [SEP] [SEP] Find A Grave - Signers of The US Declaration of Independence [SEP] Records 21 - 40 ... Declaration of Independence Signer, Massachusetts Governor. The cousin to \nJohn Adams, second President of the United States, he was a ... Born in \nElizabethtown, New Jersey, he was the son of a farmer, and grew up with an \naffinity ... Declaration of Independence Signer, United States Constitution Signer. [SEP] [SEP] America's Founding Fathers - Delegates to the Constitutional ... [SEP] Gerry was born in 1744 at Marblehead, MA, the third of 12 children. ... In 1797 \nPresident John Adams appointed him as the only non-Federalist ... 
He left his \nwife, who was to live until 1849, the last surviving widow of a signer of the \nDeclaration of Independence, ... He was the eldest son of a prosperous farmer-\nmerchant. [SEP] [SEP] America's Founding Documents | National Archives [SEP] Oct 12, 2016 ... Constitution of the United States ... The Declaration of Independence, \nConstitution and Bill of Rights, collectively known as the Charters of... [SEP] [SEP] Founding Fathers and Slavery - Were all of America's Founding ... [SEP] Discussion on the stance of America's founding fathers regarding ethnic race. ... \nthe Clarence Thomas hearings, the framers' ideas about natural law must be ... \npart on the efforts of these Founders, Pennsylvania and Massachusetts abolished \n... John Adams, Signer of the Declaration of Independence and U.S. President. [SEP] "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "One edition calls this Darwin opus one of \"the most readable and approachable\" of revolutionary scientific works",
"id": "4a360242cbd7421aab8ab86a30b9766b",
"answers": [
{
"text": "The Origin of Species",
"answer_start": 372
}
]
}
],
"context": "[SEP] [SEP] Free Flashcards about GENERAL SCIENCE - StudyStack [SEP] ONE EDITION CALLS THIS DARWIN OPUS ONE OF THE \"MOST READABLE \nAND APPROACHABLE\" OF REVOLUTIONARY SCIENTIFIC WORKS, THE... [SEP] [SEP] Jeopardy! #1 Flashcards | Quizlet [SEP] FOREWORDS 400 One edition calls this Darwin opus one of \"the most readable \nand approachable\" of revolutionary scientific works. The Origin of Species. "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "This Asian political party was founded in 1885 with \"Indian National\" as part of its name",
"id": "670dc24a0fb04455bbb889733bf002a1",
"answers": [
{
"text": "Congress \nParty",
"answer_start": 3141
}
]
}
],
"context": "[SEP] [SEP] Indian National Congress - Wikipedia [SEP] The Indian National Congress (INC, often called Congress), is one of two major \npolitical parties in India; the other being the Bharatiya Janata Party. Congress \nwas founded in 1885 during the British Raj; its founders include Allan Octavian \nHume (a prominent member ...... parties in Asia Political parties in India \nPolitical parties established in 1885... [SEP] [SEP] History of the Indian National Congress - Wikipedia [SEP] From its foundation on 28 December 1885 by 65 individuals with the active help \nby A.O Hume, .... A whole class of political leaders disagreed with Gandhi. ... of \nthe Indian National Congress, the battle of the party's soul was won, and a new ... \nArmy in South-east Asia during the Second World War, he invoked Gandhi's \nname... [SEP] [SEP] Indian National Association - Wikipedia [SEP] The Indian National Association also known as Indian Association was the first \navowed ... The Association attracted educated Indians and civic leaders from all \nparts of the country, ... Its origins are from the Zamindari Sabha (Association) \nfounded by ... Prarthana Samaj) to form in 1885 the Indian National Congress \nwhich has... [SEP] [SEP] Indian National Congress - Infoplease [SEP] Indian National Congress, Indian political party, founded in 1885. Its founding \nmembers proposed economic reforms and wanted a larger role in the. ... See \nmore Encyclopedia articles on: South Asian History ... Infoplease, Part of FEN \nLearning. [SEP] [SEP] Indian National Congress | Making Britain - The Open University [SEP] Other names: ... In its first twenty years, known as a 'moderate phase', Congress \nwas not ... for independence or self-rule but for greater political autonomy within \nempire. ... in London, acting as a lobby group in Britain, which was founded in \n1889. ... Kaushik, Harish P., The Indian National Congress in England (1885-\n1920)... [SEP] [SEP] Indian National Congress facts, information, pictures | Encyclopedia ... [SEP] Founded in 1885, the Indian National Congress (INC) was at the forefront of the \n... but Mohandas K. Gandhi, who assumed its leadership in 1920 and remained \nits ... After independence the Congress, hitherto an all-embracing national \nmovement, was transformed into a political party. ... Political Parties in South Asia\n. [SEP] [SEP] Political parties - India - system, power, policy [SEP] Encyclopedia of the Nations Asia and Oceania India ... Founded in 1885, the \nIndian National Congress, known after 1947 as the ... It became the ruling party of \na free India by reason of its national popularity and ... Congress in all but name, \nreflecting various populist, socialist, business, personal, and regional interests. [SEP] [SEP] Why was the Indian National Congress formed? | Reference.com [SEP] The Indian National Congress, or INC, was formed in 1885 to create an outlet for \nIndians ... Modern Asia ... which was created in 1876 and the first Indian political \norganization of its kind. ... What were the names of Mahatma Gandhi's four \nchildren? ... of National Congress of American Indians Indian National Congress \nParty... [SEP] [SEP] Indian Independence: Nationalism Source 1 - British Library [SEP] History of Indian National Congress. ... (Text from British Library exhibition notes: \nThe Indian National Congress 1885-1985) ... public opinion in India on political \nquestions and to unite Indians around a common political programme. ... 
The Bill, \nwhich took its name from Sir Courtenay Ilbert, the Law Member of the Viceroy's... [SEP] [SEP] Indian Association | political organization, India | Britannica.com [SEP] Nationalist political group in India that favoured local self-government and served \nas a ... in character, using expatriate Bengali communities as centres for its \nprovincial branches. ... After the Indian National Congress was founded in 1885, \nthe association gradually lost ... country that occupies the greater part of South \nAsia. [SEP] [SEP] Indian National Congress (INC) Party History, Symbol, Founders ... [SEP] Sep 24, 2015 ... Know about the history of Indian National Congress (INC) - its top leaders ... \nToday it is one "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "In the winter of 1971-72, a record 1,122 inches of snow fell at Rainier Paradise Ranger Station in this state",
"id": "9f294e3878034f48bd421ed0eb96f2a3",
"answers": [
{
"text": "Washington",
"answer_start": 733
}
]
}
],
"context": "[SEP] [SEP] National Seasonal Snowfall Record, Mt. Baker, WA [SEP] and heavy snow cantinued to fall, the ski area became increasingly ... and \naccepted record of 1,122 inches, set in 1971-72 at the. National weather ... tion at \nParadise Ranger Station on the southern slopes of ... tently low throughout the \nwinter. Therefore .... Totals for Mount Baker Ski Area and Mount Rainier - \nParadise. [SEP] [SEP] Articles: The Snows of Rainier - American Thinker [SEP] Apr 28, 2013 ... In early April 1999, Paradise Ranger Station on Mt. Rainier (+14,411) ... one year \nwith 1,140 inches (95 feet -- snowfall does not record snowpack, only the ... \nRanger Station itself when 1,122 inches of snow fell in the winter of 1971-72. ... \nThat Washington State snow is very wet and heavy also indicates that... [SEP] [SEP] (TB6RVMG) Travel Bug Dog Tag - Snow-Paradise Ranger Station TB [SEP] ... Sunday, May 3, 2015; Origin: Texas, United States; Recently Spotted: In \nWildcat Bluff ... Does one rank the record lows, lowest monthly averages or \nlowest winter averages? ... Paradise Ranger Station is in Mount Rainier National \nPark. ... 1,122 inches (93.5 ft) of snow fell during the winter of 1971/72, setting a \nworld record. [SEP] [SEP] Annual Snowfall Totals at Paradise, 1920 to 2011 - National Park ... [SEP] The Paradise area at Mount Rainier, located at an elevation of 5,400 feet, is \nknown for its snowfall. Paradise once held the world record for measured \nsnowfall in single year: 1,122 inches , or 93.5 feet (28.5 meters), of snow fell at. \nParadise over the winter of 1971-72. Snowfall is ... To include the full winter \nseason, snowfall is... [SEP] [SEP] Mount Rainier National Park - Wikipedia [SEP] Mount Rainier National Park is a United States National Park located in \nsoutheast Pierce .... Paradise is the most popular destination for visitors to Mount \nRainier National ... 1,122 inches (28.5 m) of snow fell during the winter of 1971/\n72, setting a .... visitor center (closed during the 2013 season), and ranger station \nlocated in... [SEP] [SEP] Mt Rainier.pdf [SEP] Jan 2, 2011 ... The park contains 368 square miles including all of Mount Rainier, ... States, \nwhile Emmons Glacier is the largest glacier by area. ... 1,122 inches (93.5 ft, 28.5 \nm) of snow fell during the winter of 1971/72, setting a world record for that year. ... \nOhanapecosh is a campground, visitor center, and ranger station... [SEP] [SEP] Web Navigation [SEP] In the winter of 1971-72, a record 1,122 inches of snow fell at Rainier Paradise \nRanger Station in this state. This company's Accutron watch, introduced in 1960,\n... [SEP] [SEP] The World's Deepest Snow (PHOTOS) | The Weather Channel [SEP] Feb 11, 2015 ... Mountain ranges intercepted by the winter storm track and northern locations ... \nAverage annual snowfall: 659 inches (Paradise Ranger Station at Mt. Rainier) - \n367 inch ... Prior to that, 1122 inches of snow fell at Paradise Ranger Station \nduring the .... World record snow depth: 465 inches at Mt. Ibuki in 1927 [SEP] [SEP] Mount Rainier National Park [SEP] Mount Rainier became the fifth American National Park on March 2, 1889. ... in \nnortheast Lewis County and southeast Pierce County in the state of Washington. \n... towards trails through the meadows that lead up to the base of Mount Rainier. \n... 1,122 inches of snow fell the winter of 1971-72, setting a world record that year. 
[SEP] [SEP] Fred the Preparedness Dog: Facts [SEP] In the winter of 1971-72, 93.5 feet (1,122 inches) of snow fell at the Rainier \nParadise Ranger Station in the state of Washington. .... scientists built large \nspring-pendulum seismometers in an attempt to record the long-period motion \nproduced by... [SEP] [SEP] Answers archive: Weather extremes - Richmond and Glen Allen ... [SEP] temperature only varied 9F from summer to winter, Fargo's varied 65F. ... You \ncan check out each state's low temperature record on this USA "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "In 2003 this airline agreed to buy KLM, creating Europe's largest airline",
"id": "beb0368980824527ac58e7f983bca97f",
"answers": [
{
"text": "Air France",
"answer_start": 22
}
]
}
],
"context": "[SEP] [SEP] CNN.com - Air France to buy rival KLM - Sep. 30, 2003 [SEP] Sep 30, 2003 ... Air France has agreed to buy rival Dutch airline KLM for 784 million euros ($900 \nmillion) in stock, creating the world's third-largest airline. ... The deal would be the \nfirst cross-border airline merger in Europe and paves the way... [SEP] [SEP] Air France to buy KLM, take top spot - tribunedigital-chicagotribune [SEP] Air France on Tuesday agreed to buy KLM Royal Dutch Airlines NV for $913 \nmillion in stock in Europe's largest airline takeover, creating what will be the \nworld's largest airline by revenue, ahead of. ... October 01, 2003|By Bloomberg \nNews. [SEP] [SEP] More airline mergers forecast - tribunedigital-baltimoresun [SEP] Air France bid for KLM comes amid U.S.-EU talks. October 01, 2003|By \nBLOOMBERG NEWS ... of KLM Royal Dutch Airlines NV, which would create \nEurope's largest airline, may spur more ... Air France, Europe's second-largest \nairline, has offered 784 million euros ($913 ... 3, may also seek to buy \ncompetitors, Doganis said. [SEP] [SEP] Air France and KLM to Merge, Europe's No. 1 Airline - The New York ... [SEP] Oct 1, 2003 ... KLM Royal Dutch Airlines and Air France merge, creating largest European ... Air \nFrance and KLM to Merge, Europe's No. 1 Airline. By JOHN TAGLIABUE OCT. 1, \n2003 ... Talks on a new Europe-wide agreement with the United States are to \nbegin .... The new shares issued to buy it would dilute the French... [SEP] [SEP] KLM - Wikipedia [SEP] KLM, legally Koninklijke Luchtvaart Maatschappij N.V. (Royal Dutch Airlines), is \nthe flag carrier ..... On 30 September 2003, Air France and KLM agreed to a \nmerger plan in ... The merger resulted in the world's largest airline group and \nshould have led to ... KLM received the award for \"Best Airline Staff Service\" in \nEurope at the... [SEP] [SEP] Air FranceKLM - Wikipedia [SEP] Air FranceKLM is a Franco-Dutch airline holding company incorporated under \nFrench law with ... Air FranceKLM was created by the mutually agreed merger \nbetween Air France and Netherlands-based KLM on 5 May 2004. As a result ... \nAir FranceKLM is one of the largest airline companies in Europe, with 204.7 \nbillion... [SEP] [SEP] Air France - Wikipedia [SEP] Air France stylized as AIRFRANCE, is the French flag carrier headquartered in \nTremblay-en-France, (north of Paris). It is a subsidiary of the Air FranceKLM \nGroup and a founding member of the ... In November 2004, Air France ranked as \nthe largest European airline with 25.5% total market share, and was the largest \nairline... [SEP] [SEP] Commission clears merger between Air France and KLM ... - Europa [SEP] Feb 11, 2004 ... On 18 December 2003, Air France and KLM notified a framework agreement ... \nAlthough the deal will create the largest airline group in Europe, the ... France \nalso agreed to enter into so-called intermodal agreements with land... [SEP] [SEP] Featured Articles about Klm - Page 3 - latimes [SEP] October 1, 2003 | From Associated Press. Air France and KLM Royal Dutch \nAirlines plan to create an aviation group that would surpass British Airways as \nEurope's largest airline operator. ... The newspaper also said that if American \nAirlines parent AMR Corp. manages to buy Northwest Airlines Corp., the four \nairlines could... [SEP] [SEP] Articles about Klm Royal Dutch Airlines - latimes [SEP] KLM Royal Dutch Airlines said Thursday that it has joined with Los Angeles ... 
\npartner, issued a statement saying it had agreed to buy a \"minority equity interest \nin a ... Air France and KLM Royal Dutch Airlines plan to create an aviation group \nthat ... The deal would unite Europe's second- and fourth-largest carriers under \none... [SEP] [SEP] Air France/KLM merger heralds further rationalisations and job cuts ... [SEP] Oct 7, 2003 ... Air France has announced that it is to merge with KLM, the Dutch airline. ... The \ntakeover will create Europe's largest airline, the "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Mark Antony called her \"The Queen of Queens\"",
"id": "90c42d76f3dd44cf86ac3d5b945acee1",
"answers": [
{
"text": "Cleopatra",
"answer_start": 28
}
]
}
],
"context": "[SEP] [SEP] Eternal Egypt - Cleopatra the Seventh [SEP] According to Egyptian law, she was married to her brother, Ptolemy the ... in the \nautumn of 42 BC, led by Mark Antony and Octavian, also called Augustus. ... Mark \nAntony spent the winter of 41-40 BC with Cleopatra, enjoying himself and ... Days \nlater, Cleopatra was named the \"Queen of Queens\" and she distributed the... [SEP] [SEP] Full stop is a synonym for this punctuation mark # Quiz # Question ... [SEP] Jun 23, 2016 ... Ultimate Christmas Quiz With Mark | Zoella. 15:01. Zoella ... Mark Antony called \nher The Queen of Queens # Quiz # Question. 0:29. Amazing... [SEP] [SEP] Cleopatra and Mark Antony - Ancient Egypt Online [SEP] Mark Anthony returned to Egypt with Cleopatra and her father briefly and .... Mark \nAntony proclaimed Cleopatra the \"Queen of Queens\" and claimed that he, not... [SEP] [SEP] Tamar of Georgia - Wikipedia [SEP] Tamar the Great (Georgian: ) ( c. 1160 18 January 1213) reigned as \nKing of Georgia .... Tamar is occasionally called dedop'ali in the Georgian \nchronicles and on some charters. Thus, the title of mep'e might have been \napplied to Tamar to mark out her unique position among women. ..... Eastmond, \nAntony (1998). [SEP] [SEP] Caesar and Cleopatra (1945) Movie Script | SS [SEP] Their chief is called Julius Caesar. .... He will know Cleopatra by her pride, her \ncourage, her majesty, and her beauty. ... Therefore the gods sent a stranger, one \nMark Antony, a Roman captain of horsemen, across the sands of the desert and \nhe set my father ..... First, to deliver to you a present from the Queen of Queens. [SEP] [SEP] Drama: Caesar and Cleopatra [SEP] Their chief is called Julius Caesar. His father was a tiger ...... [She runs out \nthrough the loggia, kissing her hand to Mark Antony across the sea]. CAESAR [\ngoing ...... APOLLODORUS. First, to deliver to you a present from the Queen of \nQueens. [SEP] [SEP] We the Women of the World | women who made history society and ... [SEP] Jun 5, 2016 ... The queen of queens ... She continued to see her son as the heir of Egypt and \nRome together as ... The East was under Marc Antony lieutenant of Caesar who \nsupported the son of Cleopatra as heir of his former general. [SEP] [SEP] The Queen's birthday: 90 years of magic and majesty - Hello! [SEP] Apr 16, 2016 ... Elizabeth was third in line to the throne after her uncle Prince Edward and her \nfather. ... Around this time, the family also acquired a Corgi called Dookie who .... \nphotographer Antony Armstrong-Jones in 1960, although her choice of ... holding \nher first grandchild was released to mark her 52nd birthday. [SEP] [SEP] Drink Like an Egyptian: Cleopatra | Femme du Coupe [SEP] Jul 19, 2012 ... Many would agree that Cleopatra is known for her extravagant ... Known as the \nQueen of Queens, Cleo didn't just play the part (Ehem ... We're guessing that \nduring her tumultuous relationships with Caesar and Mark Antony,... [SEP] [SEP] Cleopatra the Irresistible - schaakstukken museum [SEP] Mar 21, 2014 ... Cleopatra swept Mark Anthony of his feet with her royal charm and ... of her \npartner and three years later she was called the 'Queen of Queens'. [SEP] [SEP] 1963: The Movies as Art - Columbia Journalism School Centennial [SEP] Apr 15, 2012 ... Kenneth Haigh is briefly interesting as Brutus, but of the other ... particularly when \nshe is trying to portray the Queen of Queens. ... 
Even in their most dramatic \nmoment, when Cleopatra and Antony are slapping each other around in her .... A \nrecent critic called Liebling's harrowing and definitive account for... [SEP] [SEP] All The Weird Tom Cruise Stories From Leah Remini'S Book About ... [SEP] Leah Remini, the queen of Queens, has written a barn-burner. ... and Katie \ninvited her to their wedding and then asked her to invite Jennifer Lopez and Marc \nAnthony. .... Yes the major religions have so-called \"thoughtful philosophies\" now"
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Built in 312 B.C. to link Rome & the South of Italy, it's still in use today",
"id": "45c25b136e9947309d3f3199a5eac397",
"answers": [
{
"text": "the Appian Way",
"answer_start": 1386
}
]
}
],
"context": "[SEP] [SEP] Appian Way - Wikipedia [SEP] The Appian Way was one of the earliest and strategically most important Roman \nroads of the ancient republic. It connected Rome to Brindisi, in southeast Italy. Its \nimportance is indicated by its common name, recorded by Statius: ... In 312 BC, \nAppius Claudius Caecus became censor at Rome. He was ..... External links[edit]\n... [SEP] [SEP] The Appian Way. Built in 312 BC and still used today. Rome to ... [SEP] It was built in 312 B.C. by Appius Claudius Caecus. ... Large stones made up the \nbulk of its construction and a softer gravel that was ..... Link to more pictures. ..... \nthe Via Appia, the Roman road connecting Rome to Brindisi, Puglia, southern \nItaly. .... The Appian way, Rome, Italy, built 2024 years ago, still in regular use \ntoday. [SEP] [SEP] Appian Way, Rome - A View On Cities [SEP] The Via Appia, a historic road built by the Romans in 312 BC. ... the port city of \nBrindisi on Italy's southeast coast, 560 km from Rome (about 350 miles). ... The \nVia Appia was lined with such monuments and many of them are still visible \ntoday. ... The Villa dei Quintili, with its ancient baths and beautiful friezes and \nsculptures is... [SEP] [SEP] Appian Way | ancient road, Italy | Britannica.com [SEP] At first it ran only 132 miles (212 km) from Rome south-southeastward to ancient \nCapua, ... Remains of Roman tombs lining the Appian Way (begun 312 BC), \nRome. ... Nor did Trajan neglect Italy's highway network: he built a new road (Via \n... which was laid out by Appius Claudius Caecus in 312 to connect Rome to \nCapua. [SEP] [SEP] Rome's Appian Way: The Perfect Springtime Stroll - Revealed Rome [SEP] Apr 4, 2012 ... The Appian Way (or, to Italians, Via Appia) was built all the way back in 312 B.C. \nAnd it was crucial. The first road linking farther-flung parts of... [SEP] [SEP] Appian Way - Jeff Bondono's Page [SEP] Location: The road that runs south from Rome, starting at the Porta San ... the first \n35-mile-long section as a military road to the south in 312 BC during the Samnite \n... of over 400 miles to the port city of Brindisi at the south of Italy, from which \nships ... A new Appian Way named Via Appia Nuova was built in parallel with the \nold... [SEP] [SEP] More info: APPIA ANTICA AND AQUEDUCTS PARK | Wheely Bike ... [SEP] Useful Link ... At the era of Emperor Augustus, in the I century BC, the historian \nDionigi di ... the Aqua Appia, both made by the censor Appio Claudio Cieco in \n312 BC. It is the period of Roman expansion towards the south of Italy, and large \n.... It was the first road built with innovative techniques that we still use today. [SEP] [SEP] Monuments: Ancient Rome - Resources for Ancient Biblical Studies [SEP] It was built in 29 BC by consul C. Statilius Taurus. ... Its length was 22,172 passus\n, of which only 358 were on arches; and its ... Aqua Appia This aqueduct was built \nin 312 B.C. It was built during the Roman Republic, by Appius Claudius Caecus. \n..... (today's Ponte Cestio) is an ancient Roman bridge still remaining today,... [SEP] [SEP] Cultural links between India and the Greco-Roman world (Article ... [SEP] Feb 12, 2011 ... Cyrus the Great (558-530 BC) built the first universal empire, stretching ... \nBabylon, but, supported by Ptolemy, he was able to return in 312 BC. .... of Iran, \nfrom which trade routes stretched to the south and still farther cast to China. .... It \nwas in use from the middle of the 3rd century BC until it died out in its... 
[SEP] [SEP] Roman Roads: Via Appia historical notes [SEP] An amatorial search of the visible traces of the Roman Roads Network. ... via \nAppia was built in several instalments between 312 BC and 191 BC to connect \nRome to Brindisi, following "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "Objects that pass closer to the sun than Mercury have been named for this mythological figure",
"id": "2a26ba2446794ed38d4633091a001a4d",
"answers": [
{
"text": "ICARUS",
"answer_start": 159
}
]
}
],
"context": "[SEP] [SEP] Free Flashcards about ASTRONOMY - StudyStack [SEP] OBJECTS THAT PASS CLOSER TO THE SUN THAN MERCURY HAVE BEEN \nNAMED FOR THIS MYTHOLOGICAL FIGURE, ICARUS. IN 1582 THE MAN... [SEP] [SEP] Vulcan (hypothetical planet) - Wikipedia [SEP] Vulcan is a small hypothetical planet that was proposed to exist in an orbit \nbetween Mercury and the Sun. Attempting to explain peculiarities of Mercury's \norbit, the 19th-century French mathematician Urbain Le Verrier hypothesized that \nthey were the result of another planet, which he named \"Vulcan\". ... Other than \nMercury, asteroid 2007 EB26, whose orbit has a semi-major axis... [SEP] [SEP] Mercury Facts: Interesting Facts about the Planet Mercury [SEP] Mercury is the closest planet to the Sun and is also the smallest of the eight \nplanets ... years or so, Mercury can be seen from Earth passing across the face of \nthe Sun. ... Mercury has been known to humanity since ancient times and \nalthough its .... dense than the Earth's atmosphere, Mercury's is closer to a true \nvacuum than... [SEP] [SEP] Mythology of the Planets - Universe Today [SEP] Mercury gets its name from the winged messenger of the gods. ... The only \nobjects in our Solar System brighter than Venus are the Sun and the Moon. ... \nEarth is the only planet not named after a Roman god or goddess, but it is \nassociated with the goddess ... Astronomy Cast has episodes on all the planets \nincluding Saturn. [SEP] [SEP] Astronomical Myths of Mercury & the Sun | holoscience.com | The ... [SEP] Jan 14, 2008 ... For more than 30 years we have virtually ignored Mercury. ... believe that \nMessenger could crack some of them on its first pass. .... The partitioning usually \nincludes smaller objects 'hot' gas giants so-called because they have been \n.... If Mercury formed close to the sun, there shouldn't be much iron... [SEP] [SEP] Space Today Online -- Solar System Kuiper Belt mystery objects ... [SEP] The newly-discovered object is more distant than the mysterious planetoid Sedna \n... If Eris ever had been close to the Sun, the methane ice would have been \nboiled off. ... The so-called terrestrial planets Mercury, Venus, Earth, Mars are \n... They named the object Sedna after the mythical Inuit goddess of the sea. It is \nsaid... [SEP] [SEP] Planets of our Solar System :: The Planets Today [SEP] Mercury has almost no atmosphere and is blasted by the Sun during the day and \n... entered Mercury's Orbit on 18th March 2011, the first man made object ever to \ndo so. ... As it orbits, Venus comes closer to Earth than any other planet in the \nsolar .... Mars is named after the Roman god of war and has been known since... [SEP] [SEP] Hypothetical Planets - Views of the Solar System [SEP] Vulcan, the intra-Mercurial planet; Mercury's Moon; Neith, the Moon of Venus; \nThe Earth's ... No new objects brighter than 9th magnitude were found near the \nSun. ... Comets have been observed to pass close enough to the Sun and ..... In \n1905, Pickering though he had discovered a tenth moon, which he named \nThemis. [SEP] [SEP] Greek Mythology - Myth Encyclopedia - god, story, legend, names ... [SEP] These figures inhabited a realm that stretched beyond the Greek landscape to \nthe ... Heroes and ordinary humans in Greek myths frequently discovered that \nthings were not ... Many have been passed down from ancient times in more than \none version. .... Hermes (Roman Mercury) was the son of Zeus and yet another \nTitan. 
[SEP] [SEP] Wikijunior:Solar System/Beck Foundation - Wikibooks, open books ... [SEP] Since then, humans have been launching vehicles into space to explore the ... \nThe probes send back information to Earth that scientists study to figure out what \nit means. .... The rest of the things in the Solar System orbit (travel around) the \nSun. .... These are the Sun, our Moon, Mercury, Venus, Mars, Jupiter, and Saturn. [SEP] [SEP] Asteroids.htm - Cosmic Elk [SEP] Since it was launched more satellites "
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "In 2004 United launched this new service that features low fares & more seats per plane",
"id": "4bce3bce06ad4b0d85022ac9e7d25bed",
"answers": [
{
"text": "Ted",
"answer_start": 2817
}
]
}
],
"context": "[SEP] [SEP] Jeopary Questions page 2246 - -O-O-O - TriviaBistro.com [SEP] AIRLINE TRAVEL: In 2004 United launched this new service that features low \nfares & more seats per plane AIRLINE TRAVEL: In 2004 United launched this... [SEP] [SEP] Low-cost carrier - Wikipedia [SEP] A low-cost carrier or low-cost airline is an airline that generally has lower fares \nand fewer ... Most low-cost carriers operate aircraft configured with a single \npassenger ... new aircraft is usually more expensive than second-hand, new \nplanes are .... low-cost airlines differ in service offerings, by definition they feature \nmost of the... [SEP] [SEP] United p.s. - Wikipedia [SEP] United p.s. is a premium service offered by United Airlines on flights between \nNewark Liberty ... Once the airline began to bounce back, United launched TED, \na low cost ... p.s., a luxury service between JFK and SFO and LAX, in October \n2004. ... seats that feature 36 inches of pitch and 7 inches of recline, 5 inches \nmore pitch... [SEP] [SEP] United Airlines - Wikipedia [SEP] United Airlines, Inc., commonly referred to as United, is a major American airline \nheadquartered .... The carrier launched a new, all coach, low-cost carrier named \nTed in 2003, and ... On March 3, 2012, United & Continental merged their \npassenger service .... UAL sold or spun off most of its assets not related to its core \nairline... [SEP] [SEP] Qantas Fact Files [SEP] Sep 1, 2010 ... Jetstar Pacific; a 46 per cent interest in Air Pacific and an interest in Jetset .... new \nlevels of luxury and comfort to the Australia-UK service from .... JETSTAR \nLaunched as a low fare domestic airline in 2004, Jetstar ... LOYALTY The Qantas \nFrequent Flyer program has more than 7.2 .... United Arab Emirates*. [SEP] [SEP] Facts and stats - Newsroom - Jetstar [SEP] Jan 12, 2015 ... Jetstar Group The Jetstar Group is one of the largest low cost airline groups in ... \nand has flown more than 140 million passengers since it launched in 2004. ... \nJetstar Airways in Australia and New Zealand (wholly owned by the Qantas ... \nAirlines, with the Qantas Group holding 30 per cent); Jetstar Japan,... [SEP] [SEP] Low Cost Carriers: How Are They Changing - Carleton University [SEP] In spite of this, by the mid 1990's, a new breed of airlines called low cost .... \nSouthwest Airline is not only the largest domestic airline in the United states, but \n.... borrowed features from the LCC business model to downgrade their product \n.... per. Seat kilometer. Cost per seat km for full Service carrier. 6.96. Cost per seat \nkm for... [SEP] [SEP] Airlines within airlines: an analysis of US network airline responses [SEP] The establishment of Low Cost Carriers offshoots by network carriers has three ... \nThe network model for scheduled airline service has delivered market ..... Ted \nwas launched by United in early 2004, using two class, older, B737-300 aircraft ... \nsome of the new entrant LCC airlines head on in more key markets: Song against\n. [SEP] [SEP] History - About Air New Zealand - Company Information - Air New ... [SEP] Company history of Air New Zealand including the pacific coral route, ... Air New \nZealand became the first airline to introduce everyday, low-cost ... a new \npremium economy service offers additional leg room, more seat recline ... 2004 \ndirect services between Auckland and San Francisco were launched, ... 
United \nStates site [SEP] [SEP] History and Milestone - Philippine Airlines [SEP] PAL resumes post-World War II operations with services to 15 domestic points. ... \nThe new aircraft enabled PAL to reduce the trans-Pacific crossing to 30 ... and \nsunrise, offered the lowest fares in the world at P 0.10 per seat mile. .... February \n2004 ... Philippine Airlines emerges as the most trusted airline brand for Filipino... [SEP] [SEP] Strategies to Fight Low-Cost Rivals - Harvard Business Review [SEP] Such companies offer products and services at prices dramatically lower than the \n... Others take the offensive by launching low-cost businesses of their own. .... \ncosts"
}
],
"title": ""
},
{
"paragraphs": [
{
"qas": [
{
"question": "This band's \"Train In Vain\" was a hidden track on its original 1979 \"London Calling\" album",
"id": "483db64145af4cf2955ec689c62d2ab5",
"answers": [
{
"text": "The Clash",
"answer_start": 100
}
]
}
],
"context": "[SEP] [SEP] Train in Vain - Wikipedia [SEP] \"Train in Vain\" is a song by the British punk rock band The Clash. It was released as the third and final single from their third studio album, London Calling (1979). ... In the US, the song's title is expanded to \"Train in Vain (Stand by Me)\", as the words \"stand by me\" dominate the chorus. [SEP] [SEP] London Calling - Wikipedia [SEP] London Calling is the third studio album by English punk rock band the Clash. It \nwas released ... London Calling was a top ten album in the UK, and its lead \nsingle \"London .... \"Train in Vain\", was originally excluded from the back cover's \ntrack listing. ... \"London Calling\" preceded the album with a 7 December 1979 \nrelease. [SEP] [SEP] Train in Vain (Stand By Me) by The Clash Songfacts [SEP] On the original vinyl copy of the album \"Train Is Vain\" isn't listed on the tracklisting \n... The band hastily tacked the song onto the end of the album just before vinyl ... \nlive favorite for the band, introduced to their live set in December 1979 and being \n.... airplay.....i think u got the 3 song wrong scrap train in vain 4 london calling. [SEP] [SEP] Perfect Sound Forever: The Clash [SEP] The Clash- the story behind 'Train in Vain' ... The book includes chapters on the \nwriting and recording of the album, on the packaging and promotion, ... But at its \ncore is a section telling the tale behind each of London Calling's 19 tracks. ... \n1979 Take the 5th tour of the USA, upon the band's return to London, Kosmo \nbegan... [SEP] [SEP] Train in Vain - Rock Music Wiki - Wikia [SEP] \"Train in Vain\" is a song by the British punk rock band The Clash. ... It was \nreleased as the third and final single from their third studio album, London \nCalling (1979). ... on the album's track listing, appearing as a hidden track at the \nend of the album. ... although its presence is announced as the title and position \non the original... [SEP] [SEP] The Clash's 'London Calling': Album Review | Billboard [SEP] Any punk band worth its leather and studs can do dystopian, apocalyptic angst. ... \nOn the opening title track of its third album, London Calling -- a rock'n'roll \nlandmark ... 14, 1979 -- The Clash approached doomsday as only it could. ... last \nsecond, Train In Vain wasn't listed on the back of the original sleeve, but that \ndidn't... [SEP] [SEP] Train in Vain The Clash 1979 | seventies music [SEP] Apr 19, 2012 ... The Clash were typical of so many English bands of the late 70's-early 80's ... \nCalling and Clampdown) from The Clash's third album, London's Calling. Train in \nVain was not mentioned on the album's original track listing, ... to the song by The \nSlits, Typical Girls, which talks about girls standing by their men. [SEP] [SEP] London Calling by The Clash | Classic Rock Review [SEP] Dec 20, 2014 ... Album review of London Calling by The Clash. Part of Classic Rock Review's \ncelebration of the 35th anniversary of 1979 albums. ... The band began to work \non their third album during the summer of 1979 at a ... Train in Vain ... a hidden \ntrack because it was not listed on the original album sleeve. [SEP] [SEP] Train in Vain - Revolvy [SEP] \" Train in Vain \" is a song by the British punk rock band The Clash . It was \nreleased as the third and final single from their third studio album, London \nCalling (1979). ... A couple of Clash Web sites describe it as a hidden track, but it \nwasn't ... 
although its presence is announced as the title and position on the \noriginal vinyl... [SEP] [SEP] London Calling - Revolvy [SEP] London Calling is the third studio album by English punk rock band the Clash . ... \n:91 The Clash arrived at Vanilla in May 1979 without "
}
],
"title": ""
}
]
}
# NOTE: draft sketch of the upcoming multi-task API; paths and some
# arguments below are placeholders, not a runnable end-to-end script.
import paddlepalm as palm

if __name__ == '__main__':
    max_seqlen = 512
    batch_size = 32
    vocab = './pretrain/ernie/vocab.txt'      # placeholder path
    train_file = 'data/match4mrqa/train.tsv'  # placeholder path

    # one reader per task
    match_reader = palm.reader.match(train_file, vocab, \
        max_seqlen, file_format='csv', tokenizer='wordpiece', \
        lang='en', shuffle_train=True)
    mrc_reader = palm.reader.mrc(train_file, phase='train')
    mlm_reader = palm.reader.mlm(train_file, phase='train')

    # one task paradigm (output head) per task
    match_tt = palm.tasktype.match(learning_strategy='pairwise')
    mrc_tt = palm.tasktype.mrc()
    mlm_tt = palm.tasktype.mlm()

    # shared backbone, built from the ERNIE config
    bb_flags = palm.load_json('./pretrain/ernie/ernie_config.json')
    bb = palm.backbone.ernie(bb_flags)

    # wrap readers and paradigms into named tasks
    match4mrqa = palm.Task('match4mrqa', match_reader, match_tt)
    mrc4mrqa = palm.Task('mrc4mrqa', mrc_reader, mrc_tt)
    mlm4mrqa = palm.Task('mlm4mrqa', mlm_reader, mlm_tt)
    # match4mrqa.reuse_with(mrc4mrqa)

    controller = palm.Controller([mrc4mrqa, match4mrqa, mlm4mrqa])
    loss = controller.build_forward(bb, mask_task=[])
    n_steps = controller.estimate_train_steps(basetask=mrc4mrqa, num_epochs=2, batch_size=8, dev_count=4)
    adam = palm.optimizer.Adam(loss)
    sched = palm.schedualer.LinearWarmup(learning_rate=5e-5, max_train_steps=n_steps, warmup_steps=int(0.1 * n_steps))
    controller.build_backward(optimizer=adam, schedualer=sched, weight_decay=0.001, use_ema=True, ema_decay=0.999)

    controller.random_init_params()
    controller.load_pretrain('../../pretrain_model/ernie/params')
    controller.train()

    # controller = palm.Controller(config='config.yaml', task_dir='tasks', for_train=False)
    # controller.pred('mrqa', inference_model_dir='output_model/secondrun/mrqa/infer_model')
train_file: "data/match4mrqa/train.tsv"
reader: match
paradigm: match
train_file: "data/mlm4mrqa/train.tsv"
reader: mlm
paradigm: mlm
train_file: data/mrqa/train.json
pred_file: data/mrqa/dev.json
pred_output_path: 'mrqa_output'
reader: mrc
paradigm: mrc
doc_stride: 128
max_query_len: 64
max_answer_len: 30
n_best_size: 20
../../pretrain/
\ No newline at end of file
train_file: "data/cls4mrqa/train.tsv"
reader: cls
paradigm: cls
n_classes: 4
train_file: "data/cls4mrqa/train.tsv"
reader: cls
paradigm: cls
n_classes: 4
train_file: "data/cls4mrqa/train.tsv"
reader: cls
paradigm: cls
n_classes: 4
train_file: "data/cls4mrqa/train.tsv"
reader: cls
paradigm: cls
n_classes: 4
train_file: "data/cls4mrqa/train.tsv"
reader: cls
paradigm: cls
n_classes: 4
train_file: "data/cls4mrqa/train.tsv"
reader: cls
paradigm: cls
n_classes: 4
## Example 1: Classification
This example shows a sentiment analysis (classification) task. The following sections detail model preparation, dataset preparation, and how to run the task.
### Step 1: Prepare Pre-trained Models & Datasets
#### Pre-trained Model
This example uses [ernie-zh-base](https://github.com/PaddlePaddle/PALM/tree/r0.3-api) as the pre-trained model.
Make sure you have downloaded the required pre-trained model into `./pretrain/` in the current folder.
#### Dataset
This task uses the `chnsenticorp` dataset.
Download the dataset:
```shell
python download.py
```
If everything goes well, a folder named `data/` will be created with all the data in it.
The data has 2 tab-separated fields, `label` and `text_a`. Here are some examples:
```
label text_a
0 当当网名不符实,订货多日不见送货,询问客服只会推托,只会要求用户再下订单。如此服务留不住顾客的。去别的网站买书服务更好。
0 XP的驱动不好找!我的17号提的货,现在就降价了100元,而且还送杀毒软件!
1 <荐书> 推荐所有喜欢<红楼>的红迷们一定要收藏这本书,要知道当年我听说这本书的时候花很长时间去图书馆找和借都没能如愿,所以这次一看到当当有,马上买了,红迷们也要记得备货哦!
```
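As a quick sanity check before training, you can count the label distribution of the training file (a minimal sketch, assuming the `data/train.tsv` layout shown above):
```python
# -*- coding: utf-8 -*-
# Sanity check: count how many examples each label has in data/train.tsv.
from collections import Counter

counts = Counter()
with open('data/train.tsv', 'r') as f:
    next(f)  # skip the `label\ttext_a` header line
    for line in f:
        counts[line.rstrip('\n').split('\t')[0]] += 1
print(counts)
```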
### Step 2: Train & Predict
The code used to perform the classification task is in `run.py`. If you have prepared the pre-trained model and the dataset required for the task, run:
```shell
python run.py
```
If you want to specify one or more GPUs for training, use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1,2 python run.py
```
During training you will see logs like these:
```
step 1/154 (epoch 0), loss: 5.512, speed: 0.51 steps/s
step 2/154 (epoch 0), loss: 2.595, speed: 3.36 steps/s
step 3/154 (epoch 0), loss: 1.798, speed: 3.48 steps/s
```
After the run, you can view the saved models in the `outputs/` folder and the predictions in the `outputs/predict` folder. Here are some examples of predictions:
```
{"index": 0, "logits": [-0.2014336884021759, 0.6799028515815735], "probs": [0.29290086030960083, 0.7070990800857544], "label": 1}
{"index": 1, "logits": [0.8593899011611938, -0.29743513464927673], "probs": [0.7607553601264954, 0.23924466967582703], "label": 0}
{"index": 2, "logits": [0.7462944388389587, -0.7083730101585388], "probs": [0.8107157349586487, 0.18928426504135132], "label": 0}
```
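Each line of `outputs/predict/predictions.json` is a standalone JSON object, and `label` is the argmax over `probs`. A short sketch (paths as above) to peek at the first few predictions:
```python
# -*- coding: utf-8 -*-
# Read the first few prediction records and confirm label == argmax(probs).
import json

with open('./outputs/predict/predictions.json', 'r') as f:
    for line in list(f)[:3]:
        rec = json.loads(line)
        pred = max(range(len(rec['probs'])), key=lambda i: rec['probs'][i])
        print(rec['index'], rec['label'], pred == rec['label'])
```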
### Step 3: Evaluate
Once you have the predictions, you can run the evaluation script to evaluate the model:
```shell
python evaluate.py
```
The evaluation results are as follows:
```
accuracy: 0.956666666667, recall: 0.949013157895, f1: 0.95688225039
```
# -*- coding: utf-8 -*-
import os
import requests
import tarfile
import shutil
from tqdm import tqdm

def download(src, url):
    """Download `url` to the local path `src` with a progress bar."""
    file_size = int(requests.head(url).headers['Content-Length'])
    header = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
                      '70.0.3538.67 Safari/537.36'
    }
    pbar = tqdm(total=file_size)
    resp = requests.get(url, headers=header, stream=True)
    # 'wb' (not append) so that re-running the script does not corrupt the file
    with open(src, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
                pbar.update(len(chunk))
    pbar.close()
    return file_size

# download and unpack the ERNIE task data archive next to this script
abs_path = os.path.abspath(__file__)
target_dir = os.path.dirname(abs_path)
download_url = "https://ernie.bj.bcebos.com/task_data_zh.tgz"
download_path = os.path.join(target_dir, "task_data_zh.tgz")
download(download_path, download_url)
tar = tarfile.open(download_path)
tar.extractall(target_dir)
tar.close()
os.remove(download_path)

# keep only the chnsenticorp part, moved into ./data
dst_dir = os.path.join(target_dir, "data")
if not os.path.isdir(dst_dir):
    os.makedirs(dst_dir)
for file in os.listdir(os.path.join(target_dir, 'task_data', 'chnsenticorp')):
    shutil.move(os.path.join(target_dir, 'task_data', 'chnsenticorp', file), dst_dir)
shutil.rmtree(os.path.join(target_dir, 'task_data'))
# -*- coding: utf-8 -*-
import json
import numpy as np

def accuracy(preds, labels):
    preds = np.array(preds)
    labels = np.array(labels)
    return (preds == labels).mean()

def f1(preds, labels):
    preds = np.array(preds)
    labels = np.array(labels)
    # f1 = 2 * p * r / (p + r), treating label '1' as the positive class
    tp = np.sum((labels == '1') & (preds == '1'))
    fp = np.sum((labels == '0') & (preds == '1'))
    fn = np.sum((labels == '1') & (preds == '0'))
    p = tp * 1.0 / (tp + fp)
    r = tp * 1.0 / (tp + fn)
    return (2 * p * r) / (p + r + 1e-8)

def recall(preds, labels):
    preds = np.array(preds)
    labels = np.array(labels)
    # recall = tp / (tp + fn)
    tp = np.sum((labels == '1') & (preds == '1'))
    fn = np.sum((labels == '1') & (preds == '0'))
    return tp * 1.0 / (tp + fn)

def res_evaluate(res_dir="./outputs/predict/predictions.json", eval_phase='test'):
    if eval_phase == 'test':
        data_dir = "./data/test.tsv"
    elif eval_phase == 'dev':
        data_dir = "./data/dev.tsv"
    else:
        raise ValueError("eval_phase should be 'dev' or 'test'")

    # gold labels: first column of the tsv file (skip the header row)
    labels = []
    with open(data_dir, "r") as file:
        for line in file:
            label = line.split("\t")[0]
            if label == 'label':
                continue
            labels.append(str(label))

    # predicted labels: one json object per line
    preds = []
    with open(res_dir, "r") as file:
        for line in file:
            preds.append(str(json.loads(line)['label']))

    assert len(labels) == len(preds), "prediction result doesn't match to labels"
    print('data num: {}'.format(len(labels)))
    print("accuracy: {}, recall: {}, f1: {}".format(
        accuracy(preds, labels), recall(preds, labels), f1(preds, labels)))

res_evaluate()
# coding=utf-8
import paddlepalm as palm
import json
from paddlepalm.distribute import gpu_dev_count
if __name__ == '__main__':
# configs
max_seqlen = 256
batch_size = 8
num_epochs = 10
lr = 5e-5
weight_decay = 0.01
vocab_path = './pretrain/ernie-zh-base/vocab.txt'
train_file = './data/train.tsv'
predict_file = './data/test.tsv'
config = json.load(open('./pretrain/ernie-zh-base/ernie_config.json'))
input_dim = config['hidden_size']
num_classes = 2
dropout_prob = 0.1
random_seed = 1
task_name = 'chnsenticorp'
save_path = './outputs/'
pred_output = './outputs/predict/'
save_type = 'ckpt'
print_steps = 20
pre_params = './pretrain/ernie-zh-base/params'
# ----------------------- for training -----------------------
# step 1-1: create readers for training
cls_reader = palm.reader.ClassifyReader(vocab_path, max_seqlen, seed=random_seed)
# step 1-2: load the training data
cls_reader.load_data(train_file, batch_size, num_epochs=num_epochs)
# step 2: create a backbone of the model to extract text features
ernie = palm.backbone.ERNIE.from_config(config)
# step 3: register the backbone in reader
cls_reader.register_with(ernie)
# step 4: create the task output head
cls_head = palm.head.Classify(num_classes, input_dim, dropout_prob)
# step 5-1: create a task trainer
trainer = palm.Trainer(task_name)
# step 5-2: build forward graph with backbone and task head
loss_var = trainer.build_forward(ernie, cls_head)
# step 6-1*: use warmup
n_steps = cls_reader.num_examples * num_epochs // batch_size
warmup_steps = int(0.1 * n_steps)
sched = palm.lr_sched.TriangularSchedualer(warmup_steps, n_steps)
# step 6-2: create an optimizer
adam = palm.optimizer.Adam(loss_var, lr, sched)
# step 6-3: build backward
trainer.build_backward(optimizer=adam, weight_decay=weight_decay)
# step 7: fit prepared reader and data
trainer.fit_reader(cls_reader)
# step 8-1*: load pretrained parameters
trainer.load_pretrain(pre_params)
# step 8-2*: set saver to save model
# save_steps = n_steps // gpu_dev_count - batch_size
save_steps = 2396
trainer.set_saver(save_steps=save_steps, save_path=save_path, save_type=save_type)
# step 8-3: start training
trainer.train(print_steps=print_steps)
# ----------------------- for prediction -----------------------
# step 1-1: create readers for prediction
print('prepare to predict...')
predict_cls_reader = palm.reader.ClassifyReader(vocab_path, max_seqlen, seed=random_seed, phase='predict')
# step 1-2: load the prediction data
predict_cls_reader.load_data(predict_file, batch_size)
# step 2: create a backbone of the model to extract text features
pred_ernie = palm.backbone.ERNIE.from_config(config, phase='predict')
# step 3: register the backbone in reader
predict_cls_reader.register_with(pred_ernie)
# step 4: create the task output head
cls_pred_head = palm.head.Classify(num_classes, input_dim, phase='predict')
# step 5: build forward graph with backbone and task head
trainer.build_predict_forward(pred_ernie, cls_pred_head)
# step 6: load the checkpoint saved during training
# model_path = './outputs/ckpt.step'+str(save_steps)
model_path = './outputs/ckpt.step'+str(11980)
pred_ckpt = trainer.load_ckpt(model_path)
# step 7: fit prepared reader and data
trainer.fit_reader(predict_cls_reader, phase='predict')
# step 8: predict
print('predicting..')
trainer.predict(print_steps=print_steps, output_dir=pred_output)
## Example 2: Matching
This example shows a sentence pair matching task. The following sections detail model preparation, dataset preparation, and how to run the task.
### Step 1: Prepare Pre-trained Models & Datasets
#### Pre-trained Model
This example uses [ernie-en-base](https://github.com/PaddlePaddle/PALM/tree/r0.3-api) as the pre-trained model.
Make sure you have downloaded the required pre-trained model into `./pretrain/` in the current folder.
#### Dataset
This task uses the `Quora Question Pairs matching` dataset.
Download the dataset:
```shell
python download.py
```
After the dataset is downloaded, convert the data format for training:
```shell
python process.py data/quora_duplicate_questions.tsv data/train.tsv data/test.tsv
```
If everything goes well, a folder named `data/` will be created with all the converted data in it.
The data has 3 tab-separated fields, `text_a`, `text_b` and `label`. Here are some examples:
```
text_a text_b label
How can the arrangement of corynebacterium xerosis be described? How would you describe waves? 0
How do you fix a Google Play Store account that isn't working? What can cause the Google Play store to not open? How are such probelms fixed? 1
Which is the best earphone under 1000? What are the best earphones under 1k? 1
What are the differences between the Dell Inspiron 3000, 5000, and 7000 series laptops? "Should I buy an Apple MacBook Pro 15"" or a Dell Inspiron 17 5000 series?" 0
```
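Since `process.py` re-joins records that were split across physical lines, it is worth verifying that every converted row came out with exactly 3 tab-separated fields (a minimal sketch, assuming the paths above):
```python
# -*- coding: utf-8 -*-
# Verify that every row of the converted file has exactly 3 fields.
bad = 0
with open('data/train.tsv', 'r') as f:
    next(f)  # skip the `text_a\ttext_b\tlabel` header
    for i, line in enumerate(f, start=2):
        if len(line.rstrip('\n').split('\t')) != 3:
            bad += 1
            print('malformed line {}: {!r}'.format(i, line[:60]))
print('{} malformed lines'.format(bad))
```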
### Step 2: Train & Predict
The code used to perform the matching task is in `run.py`. If you have prepared the pre-trained model and the dataset required for the task, run:
```shell
python run.py
```
If you want to specify one or more GPUs for training, use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1,2 python run.py
```
During training you will see logs like these:
```
step 20/49087 (epoch 0), loss: 1.079, speed: 3.48 steps/s
step 40/49087 (epoch 0), loss: 1.251, speed: 5.18 steps/s
step 60/49087 (epoch 0), loss: 1.193, speed: 5.04 steps/s
```
After the run, you can view the saved models in the `outputs/` folder and the predictions in the `outputs/predict` folder. Here are some examples of predictions:
```
{"index": 0, "logits": [-0.32688724994659424, -0.8568955063819885], "probs": [0.629485011100769, 0.3705149292945862], "label": 0}
{"index": 1, "logits": [-0.2735646963119507, -0.7983021140098572], "probs": [0.6282548904418945, 0.37174513936042786], "label": 0}
{"index": 2, "logits": [-0.3381381630897522, -0.8614270091056824], "probs": [0.6279165148735046, 0.37208351492881775], "label": 0}
```
### Step 3: Evaluate
Once you have the predictions, you can run the evaluation script to evaluate the model:
```shell
python evaluate.py
```
The evaluation results are as follows:
```
accuracy: 0.857906976744, recall: 0.824249846908, f1: 0.81501664653
```
# -*- coding: utf-8 -*-
import os
import requests
from tqdm import tqdm

def download(src, url):
    """Download `url` to the local path `src` with a progress bar."""
    file_size = int(requests.head(url).headers['Content-Length'])
    header = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
                      '70.0.3538.67 Safari/537.36'
    }
    pbar = tqdm(total=file_size)
    resp = requests.get(url, headers=header, stream=True)
    # 'wb' (not append) so that re-running the script does not corrupt the file
    with open(src, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
                pbar.update(len(chunk))
    pbar.close()
    return file_size

# download the raw Quora question pairs tsv into ./data
abs_path = os.path.abspath(__file__)
data_dir = os.path.join(os.path.dirname(abs_path), "data")
if not os.path.isdir(data_dir):
    os.makedirs(data_dir)
download_url = "http://qim.fs.quoracdn.net/quora_duplicate_questions.tsv"
download_path = os.path.join(data_dir, "quora_duplicate_questions.tsv")
download(download_path, download_url)
# -*- coding: utf-8 -*-
import json
import numpy as np

def accuracy(preds, labels):
    preds = np.array(preds)
    labels = np.array(labels)
    return (preds == labels).mean()

def f1(preds, labels):
    preds = np.array(preds)
    labels = np.array(labels)
    # f1 = 2 * p * r / (p + r), treating label '1' as the positive class
    tp = np.sum((labels == '1') & (preds == '1'))
    fp = np.sum((labels == '0') & (preds == '1'))
    fn = np.sum((labels == '1') & (preds == '0'))
    p = tp * 1.0 / (tp + fp)
    r = tp * 1.0 / (tp + fn)
    return (2 * p * r) / (p + r + 1e-8)

def recall(preds, labels):
    preds = np.array(preds)
    labels = np.array(labels)
    # recall = tp / (tp + fn)
    tp = np.sum((labels == '1') & (preds == '1'))
    fn = np.sum((labels == '1') & (preds == '0'))
    return tp * 1.0 / (tp + fn)

def res_evaluate(res_dir="./outputs/predict/predictions.json", eval_phase='test'):
    if eval_phase == 'test':
        data_dir = "./data/test.tsv"
    elif eval_phase == 'dev':
        data_dir = "./data/dev.tsv"
    else:
        raise ValueError("eval_phase should be 'dev' or 'test'")

    # gold labels: third column of the tsv file (skip the header row)
    labels = []
    with open(data_dir, "r") as file:
        for line in file:
            label = line.rstrip("\n").split("\t")[2]
            if label == 'label':
                continue
            labels.append(str(label))

    # predicted labels: one json object per line
    preds = []
    with open(res_dir, "r") as file:
        for line in file:
            preds.append(str(json.loads(line)['label']))

    assert len(labels) == len(preds), \
        "prediction result({}) doesn't match to labels({})".format(len(preds), len(labels))
    print('data num: {}'.format(len(labels)))
    print("accuracy: {}, recall: {}, f1: {}".format(
        accuracy(preds, labels), recall(preds, labels), f1(preds, labels)))

res_evaluate()
# -*- coding: utf-8 -*-
import sys
import os

if len(sys.argv) != 4:
    print("usage: python process.py <raw_tsv> <train_tsv> <test_tsv>")
    exit(1)
data_dir = sys.argv[1]
if not os.path.exists(data_dir):
    print("%s not exists" % data_dir)
    exit(1)
train_dir = sys.argv[2]
train_file = open(train_dir, "w")
train_file.write("text_a\ttext_b\tlabel\n")
test_dir = sys.argv[3]
test_file = open(test_dir, "w")
test_file.write("text_a\ttext_b\tlabel\n")

with open(data_dir, "r") as file:
    before = ""
    cnt = 0
    flag = 0
    for line in file:
        line = line.strip("\n")
        line_t = line.split("\t")
        if len(line_t) < 6:
            # a record whose question text contains a newline is split across
            # two physical lines; buffer the first half and join it with the
            # second before extracting the fields
            if flag:
                flag = 0
                line_t = "{}{}".format(before, line).split("\t")
                if len(line_t) < 6:
                    continue
                out_line = "{}\t{}\t{}\n".format(line_t[3], line_t[4], line_t[5])
            else:
                flag = 1
                before = line
                continue
        else:
            out_line = "{}\t{}\t{}\n".format(line_t[3], line_t[4], line_t[5])
        cnt += 1
        # cnt == 1 is the header row; the next 4300 records become the
        # test set and the following 100000 the training set
        if 2 <= cnt <= 4301:
            test_file.write(out_line)
        elif 4302 <= cnt <= 104301:
            train_file.write(out_line)
train_file.close()
test_file.close()
# coding=utf-8
import paddlepalm as palm
import json
from paddlepalm.distribute import gpu_dev_count
if __name__ == '__main__':
# configs
max_seqlen = 128
batch_size = 16
num_epochs = 3
lr = 3e-5
weight_decay = 0.0
num_classes = 2
random_seed = 1
dropout_prob = 0.1
save_path = './outputs/'
save_type = 'ckpt'
pred_model_path = './outputs/ckpt.step'+str(18732)
print_steps = 50
pred_output = './outputs/predict/'
pre_params = './pretrain/ernie-en-base/params'
task_name = 'Quora Question Pairs matching'
vocab_path = './pretrain/ernie-en-base/vocab.txt'
train_file = './data/train.tsv'
predict_file = './data/test.tsv'
config = json.load(open('./pretrain/ernie-en-base/ernie_config.json'))
input_dim = config['hidden_size']
# ----------------------- for training -----------------------
# step 1-1: create readers for training
match_reader = palm.reader.MatchReader(vocab_path, max_seqlen, seed=random_seed)
# step 1-2: load the training data
match_reader.load_data(train_file, file_format='tsv', num_epochs=num_epochs, batch_size=batch_size)
# step 2: create a backbone of the model to extract text features
ernie = palm.backbone.ERNIE.from_config(config)
# step 3: register the backbone in reader
match_reader.register_with(ernie)
# step 4: create the task output head
match_head = palm.head.Match(num_classes, input_dim, dropout_prob)
# step 5-1: create a task trainer
trainer = palm.Trainer(task_name)
# step 5-2: build forward graph with backbone and task head
loss_var = trainer.build_forward(ernie, match_head)
# step 6-1*: use warmup
n_steps = match_reader.num_examples * num_epochs // batch_size
warmup_steps = int(0.1 * n_steps)
print('total_steps: {}'.format(n_steps))
print('warmup_steps: {}'.format(warmup_steps))
sched = palm.lr_sched.TriangularSchedualer(warmup_steps, n_steps)
# step 6-2: create an optimizer
adam = palm.optimizer.Adam(loss_var, lr, sched)
# step 6-3: build backward
trainer.build_backward(optimizer=adam, weight_decay=weight_decay)
# step 7: fit prepared reader and data
trainer.fit_reader(match_reader)
# step 8-1*: load pretrained parameters
trainer.load_pretrain(pre_params, False)
# step 8-2*: set saver to save model
# save_steps = (n_steps-16) // gpu_dev_count
save_steps = 6244
trainer.set_saver(save_path=save_path, save_steps=save_steps, save_type=save_type)
# step 8-3: start training
trainer.train(print_steps=print_steps)
# ----------------------- for prediction -----------------------
# step 1-1: create readers for prediction
print('prepare to predict...')
predict_match_reader = palm.reader.MatchReader(vocab_path, max_seqlen, seed=random_seed, phase='predict')
# step 1-2: load the prediction data
predict_match_reader.load_data(predict_file, batch_size)
# step 2: create a backbone of the model to extract text features
pred_ernie = palm.backbone.ERNIE.from_config(config, phase='predict')
# step 3: register the backbone in reader
predict_match_reader.register_with(pred_ernie)
# step 4: create the task output head
match_pred_head = palm.head.Match(num_classes, input_dim, phase='predict')
# step 5: build forward graph with backbone and task head
trainer.build_predict_forward(pred_ernie, match_pred_head)
# step 6: load the checkpoint saved during training
pred_ckpt = trainer.load_ckpt(pred_model_path)
# step 7: fit prepared reader and data
trainer.fit_reader(predict_match_reader, phase='predict')
# step 8: predict
print('predicting..')
trainer.predict(print_steps=print_steps, output_dir=pred_output)
## Example 4: Machine Reading Comprehension
This example shows a span-extraction machine reading comprehension task. The following sections detail model preparation, dataset preparation, and how to run the task.
### Step 1: Prepare Pre-trained Models & Datasets
#### Pre-trained Model
This example uses [ernie-zh-base](https://github.com/PaddlePaddle/PALM/tree/r0.3-api) as the pre-trained model.
Make sure you have downloaded the required pre-trained model into `./pretrain/` in the current folder.
#### Dataset
This task uses the `CMRC2018` dataset. `CMRC2018` is a span-extraction reading comprehension evaluation organized by the Chinese Information Processing Society of China.
Download the dataset:
```shell
python download.py
```
If everything goes well, a folder named `data/` will be created with all the data in it.
Here is an example from the data:
```json
"paragraphs": [
{
"id": "TRAIN_36",
"context": "NGC 6231是一个位于天蝎座的疏散星团,天球座标为赤经16时54分,赤纬-41度48分,视觉观测大小约45角分,亮度约2.6视星等,距地球5900光年。NGC 6231年龄约为三百二十万年,是一个非常年轻的星团,星团内的最亮星是5等的天蝎座 ζ1星。用双筒望远镜或小型望远镜就能看到个别的行星。NGC 6231在1654年被意大利天文学家乔瓦尼·巴蒂斯特·霍迪尔纳(Giovanni Battista Hodierna)以Luminosae的名字首次纪录在星表中,但是未见记载于夏尔·梅西耶的天体列表和威廉·赫歇尔的深空天体目录。这个天体在1678年被爱德蒙·哈雷(I.7)、1745年被夏西亚科斯(Jean-Phillippe Loys de Cheseaux)(9)、1751年被尼可拉·路易·拉卡伊(II.13)分别再次独立发现。",
"qas": [
{
"question": "NGC 6231的经纬度是多少?",
"id": "TRAIN_36_QUERY_0",
"answers": [
{
"text": "赤经16时54分,赤纬-41度48分",
"answer_start": 27
}
]
}
```
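In this SQuAD-style format, `answer_start` is a character offset into `context`. A quick sketch (assuming `data/train.json` from the download step) to verify that the offsets line up with the answer text:
```python
# -*- coding: utf-8 -*-
# Check that context[answer_start : answer_start+len(text)] equals the answer.
import json

with open('data/train.json', 'r') as f:
    dataset = json.load(f)

mismatches = 0
for article in dataset['data']:
    for para in article['paragraphs']:
        context = para['context']
        for qa in para['qas']:
            for ans in qa['answers']:
                start = ans['answer_start']
                if context[start:start + len(ans['text'])] != ans['text']:
                    mismatches += 1
print('{} misaligned answers'.format(mismatches))
```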
### Step 2: Train & Predict
The code used to perform the reading comprehension task is in `run.py`. If you have prepared the pre-trained model and the dataset required for the task, run:
```shell
python run.py
```
If you want to specify one or more GPUs for training, use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1,2 python run.py
```
During training you will see logs like these:
```
step 1/1515 (epoch 0), loss: 6.251, speed: 0.31 steps/s
step 2/1515 (epoch 0), loss: 6.206, speed: 0.80 steps/s
step 3/1515 (epoch 0), loss: 6.172, speed: 0.86 steps/s
```
After the run, you can view the saved models in the `outputs/` folder and the predictions in the `outputs/predict` folder. Here are some examples of predictions:
```json
{
"DEV_0_QUERY_0": "光 荣 和 ω-force 开 发",
"DEV_0_QUERY_1": "任 天 堂 游 戏 谜 之 村 雨 城",
"DEV_0_QUERY_2": "战 史 演 武 」&「 争 霸 演 武 」。",
"DEV_1_QUERY_0": "大 陆 传 统 器 乐 及 戏 曲 里 面 常 用 的 打 击 乐 记 谱 方 法 , 以 中 文 字 的 声 音 模 拟 敲 击 乐 的 声 音 , 纪 录 打 击 乐 的 各 种 不 同 的 演 奏 方 法 。",
"DEV_1_QUERY_1": "「 锣 鼓 点",
"DEV_1_QUERY_2": "锣 鼓 的 运 用 有 约 定 俗 成 的 程 式 , 依 照 角 色 行 当 的 身 份 、 性 格 、 情 绪 以 及 环 境 , 配 合 相 应 的 锣 鼓 点",
"DEV_1_QUERY_3": "鼓 、 锣 、 钹 和 板 四 类 型",
"DEV_2_QUERY_0": "364.6 公 里",
}
```
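The predicted spans come back whitespace-tokenized. For the mostly-Chinese CMRC answers you can recover plain text by simply dropping the spaces (a rough sketch; note this would also glue together any English words in the span):
```python
# -*- coding: utf-8 -*-
# Strip the token-separating spaces from the predicted answer spans.
import json

with open('./outputs/predict/predictions.json', 'r') as f:
    preds = json.load(f)
for qid, span in sorted(preds.items())[:3]:
    print(qid, span.replace(' ', ''))
```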
### Step 3: Evaluate
Once you have the predictions, you can run the evaluation script to evaluate the model:
```shell
python evaluate.py
```
The evaluation results are as follows:
```
data_num: 3219
em_score: 0.963031997515, f1: 83.9865402973
```
# -*- coding: utf-8 -*-
import os
import requests
import tarfile
import shutil
from tqdm import tqdm

def download(src, url):
    """Download `url` to the local path `src` with a progress bar."""
    file_size = int(requests.head(url).headers['Content-Length'])
    header = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
                      '70.0.3538.67 Safari/537.36'
    }
    pbar = tqdm(total=file_size)
    resp = requests.get(url, headers=header, stream=True)
    # 'wb' (not append) so that re-running the script does not corrupt the file
    with open(src, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
                pbar.update(len(chunk))
    pbar.close()
    return file_size

# download and unpack the ERNIE task data archive next to this script
abs_path = os.path.abspath(__file__)
target_dir = os.path.dirname(abs_path)
download_url = "https://ernie.bj.bcebos.com/task_data_zh.tgz"
download_path = os.path.join(target_dir, "task_data_zh.tgz")
download(download_path, download_url)
tar = tarfile.open(download_path)
tar.extractall(target_dir)
tar.close()
os.remove(download_path)

# keep only the cmrc2018 part, moved into ./data
dst_dir = os.path.join(target_dir, "data")
if not os.path.isdir(dst_dir):
    os.makedirs(dst_dir)
for file in os.listdir(os.path.join(target_dir, 'task_data', 'cmrc2018')):
    shutil.move(os.path.join(target_dir, 'task_data', 'cmrc2018', file), dst_dir)
shutil.rmtree(os.path.join(target_dir, 'task_data'))
# -*- coding: utf-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''
Evaluation script for CMRC 2018
version: v5
Note:
v5 formatted output, add usage description
v4 fixed segmentation issues
'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import re
import json
import nltk
# split Chinese with English
def mixed_segmentation(in_str, rm_punc=False):
in_str = in_str.lower().strip()
segs_out = []
temp_str = ""
sp_char = [
'-', ':', '_', '*', '^', '/', '\\', '~', '`', '+', '=', ',', '。', ':',
'?', '!', '“', '”', ';', '’', '《', '》', '……', '·', '、', '「', '」', '(',
')', '-', '~', '『', '』'
]
for char in in_str:
if rm_punc and char in sp_char:
continue
if re.search(r'[\u4e00-\u9fa5]', char) or char in sp_char:
if temp_str != "":
ss = nltk.word_tokenize(temp_str)
segs_out.extend(ss)
temp_str = ""
segs_out.append(char)
else:
temp_str += char
#handling last part
if temp_str != "":
ss = nltk.word_tokenize(temp_str)
segs_out.extend(ss)
return segs_out
# remove punctuation
def remove_punctuation(in_str):
in_str = in_str.lower().strip()
sp_char = [
'-', ':', '_', '*', '^', '/', '\\', '~', '`', '+', '=', ',', '。', ':',
'?', '!', '“', '”', ';', '’', '《', '》', '……', '·', '、', '「', '」', '(',
')', '-', '~', '『', '』'
]
out_segs = []
for char in in_str:
if char in sp_char:
continue
else:
out_segs.append(char)
return ''.join(out_segs)
# find the longest common substring
def find_lcs(s1, s2):
m = [[0 for i in range(len(s2) + 1)] for j in range(len(s1) + 1)]
mmax = 0
p = 0
for i in range(len(s1)):
for j in range(len(s2)):
if s1[i] == s2[j]:
m[i + 1][j + 1] = m[i][j] + 1
if m[i + 1][j + 1] > mmax:
mmax = m[i + 1][j + 1]
p = i + 1
return s1[p - mmax:p], mmax
#
def evaluate(ground_truth_file, prediction_file):
f1 = 0
em = 0
total_count = 0
skip_count = 0
for instances in ground_truth_file["data"]:
for instance in instances["paragraphs"]:
context_text = instance['context'].strip()
for qas in instance['qas']:
total_count += 1
query_id = qas['id'].strip()
query_text = qas['question'].strip()
answers = [ans["text"] for ans in qas["answers"]]
if query_id not in prediction_file:
print('Unanswered question: {}\n'.format(
query_id))
skip_count += 1
continue
prediction = prediction_file[query_id]
f1 += calc_f1_score(answers, prediction)
em += calc_em_score(answers, prediction)
f1_score = 100.0 * f1 / total_count
em_score = 100.0 * em / total_count
return f1_score, em_score, total_count, skip_count
def calc_f1_score(answers, prediction):
f1_scores = []
for ans in answers:
ans_segs = mixed_segmentation(ans, rm_punc=True)
prediction_segs = mixed_segmentation(prediction, rm_punc=True)
lcs, lcs_len = find_lcs(ans_segs, prediction_segs)
if lcs_len == 0:
f1_scores.append(0)
continue
precision = 1.0 * lcs_len / len(prediction_segs)
recall = 1.0 * lcs_len / len(ans_segs)
f1 = (2 * precision * recall) / (precision + recall)
f1_scores.append(f1)
return max(f1_scores)
def calc_em_score(answers, prediction):
em = 0
for ans in answers:
ans_ = remove_punctuation(ans)
prediction_ = remove_punctuation(prediction)
if ans_ == prediction_:
em = 1
break
return em
def eval_file(dataset_file, prediction_file):
ground_truth_file = json.load(open(dataset_file, 'r'))
prediction_file = json.load(open(prediction_file, 'r'))
F1, EM, TOTAL, SKIP = evaluate(ground_truth_file, prediction_file)
AVG = (EM + F1) * 0.5
return EM, F1, AVG, TOTAL
if __name__ == '__main__':
    EM, F1, AVG, TOTAL = eval_file("task_data/cmrc2018/dev.json", "predictions.json")
    print('data_num: {}'.format(TOTAL))
    print('em_score: {}, f1: {}'.format(EM, F1))
# coding=utf-8
import paddlepalm as palm
import json
from paddlepalm.distribute import gpu_dev_count
if __name__ == '__main__':
# configs
max_seqlen = 512
batch_size = 8
num_epochs = 8
lr = 3e-5
doc_stride = 128
max_query_len = 64
max_ans_len = 128
weight_decay = 0.01
print_steps = 20
vocab_path = './pretrain/ernie-zh-base/vocab.txt'
do_lower_case = True
train_file = './data/train.json'
predict_file = './data/dev.json'
save_path = './outputs/'
pred_output = './outputs/predict/'
save_type = 'ckpt'
task_name = 'cmrc2018'
pre_params = './pretrain/ernie-zh-base/params'
config = json.load(open('./pretrain/ernie-zh-base/ernie_config.json'))
# ----------------------- for training -----------------------
# step 1-1: create readers for training
mrc_reader = palm.reader.MRCReader(vocab_path, max_seqlen, max_query_len, doc_stride, do_lower_case=do_lower_case)
# step 1-2: load the training data
mrc_reader.load_data(train_file, file_format='json', num_epochs=num_epochs, batch_size=batch_size)
# step 2: create a backbone of the model to extract text features
ernie = palm.backbone.ERNIE.from_config(config)
# step 3: register the backbone in reader
mrc_reader.register_with(ernie)
# step 4: create the task output head
mrc_head = palm.head.MRC(max_query_len, config['hidden_size'], do_lower_case=do_lower_case, max_ans_len=max_ans_len)
# step 5-1: create a task trainer
trainer = palm.Trainer(task_name)
# step 5-2: build forward graph with backbone and task head
loss_var = trainer.build_forward(ernie, mrc_head)
# step 6-1*: use warmup
n_steps = mrc_reader.num_examples * num_epochs // batch_size
warmup_steps = int(0.1 * n_steps)
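# The triangular schedule below (assumed behavior) raises the learning rate
# linearly from 0 to `lr` over the first `warmup_steps` steps, then decays it
# linearly back towards 0 by step `n_steps`.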
sched = palm.lr_sched.TriangularSchedualer(warmup_steps, n_steps)
# step 6-2: create an optimizer
adam = palm.optimizer.Adam(loss_var, lr, sched)
# step 6-3: build backward
trainer.build_backward(optimizer=adam, weight_decay=weight_decay)
# step 7: fit prepared reader and data
trainer.fit_reader(mrc_reader)
# step 8-1*: load pretrained parameters
trainer.load_pretrain(pre_params)
# step 8-2*: set saver to save model
# save_steps = (n_steps-8) // gpu_dev_count // 4
save_steps = 1520
trainer.set_saver(save_path=save_path, save_steps=save_steps, save_type=save_type)
# step 8-3: start training
trainer.train(print_steps=print_steps)
# ----------------------- for prediction -----------------------
# step 1-1: create readers for prediction
predict_mrc_reader = palm.reader.MRCReader(vocab_path, max_seqlen, max_query_len, doc_stride, do_lower_case=do_lower_case, phase='predict')
# step 1-2: load the data for prediction
predict_mrc_reader.load_data(predict_file, batch_size)
# step 2: create a backbone of the model to extract text features
pred_ernie = palm.backbone.ERNIE.from_config(config, phase='predict')
# step 3: register the backbone in reader
predict_mrc_reader.register_with(pred_ernie)
# step 4: create the task output head
mrc_pred_head = palm.head.MRC(max_query_len, config['hidden_size'], do_lower_case=do_lower_case, max_ans_len=max_ans_len, phase='predict')
# step 5: build forward graph with backbone and task head
trainer.build_predict_forward(pred_ernie, mrc_pred_head)
# step 6: load pretrained model
pred_model_path = './outputs/ckpt.step'+str(12160)
pred_ckpt = trainer.load_ckpt(pred_model_path)
# step 7: fit prepared reader and data
trainer.fit_reader(predict_mrc_reader, phase='predict')
# step 8: predict
print('predicting..')
trainer.predict(print_steps=print_steps, output_dir="outputs/")
## Examples 5: Predict (Classification)
This example performs sentiment analysis, a binary classification task. The following sections detail model preparation, dataset preparation, and how to run the task.
### Step 1: Prepare Pre-trained Models & Datasets
#### Pre-trained Model
The pre-trained model used in this task is [ernie-zh-base](https://github.com/PaddlePaddle/PALM/tree/r0.3-api).
Make sure you have downloaded the required pre-trained model into the current folder.
#### Dataset
This task uses the `chnsenticorp` dataset.
Download dataset:
```shell
python download.py
```
If everything goes well, there will be a folder named `data/` created with all the data in it.
The data should have 2 fields, `label	text_a`, in tsv format. Here are some example rows:
```
label text_a
0 当当网名不符实,订货多日不见送货,询问客服只会推托,只会要求用户再下订单。如此服务留不住顾客的。去别的网站买书服务更好。
0 XP的驱动不好找!我的17号提的货,现在就降价了100元,而且还送杀毒软件!
1 <荐书> 推荐所有喜欢<红楼>的红迷们一定要收藏这本书,要知道当年我听说这本书的时候花很长时间去图书馆找和借都没能如愿,所以这次一看到当当有,马上买了,红迷们也要记得备货哦!
```
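A quick way to sanity-check the format is to read a few rows yourself; a minimal sketch, assuming the `data/test.tsv` file created by the download script and that the text itself contains no tab characters:
```python
with open('data/test.tsv') as f:
    header = next(f).strip('\n').split('\t')   # ['label', 'text_a']
    for i, line in enumerate(f):
        label, text_a = line.strip('\n').split('\t')
        print(label, text_a[:20])
        if i == 2:
            break
```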
### Step 2: Predict
The code used to perform the classification task is in `run.py`. If you have prepared the pre-trained model and the dataset required for the task, run:
```shell
python run.py
```
If you want to specify a specific gpu or use multiple gpus for prediction, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1,2 python run.py
```
Some logs will be shown below:
```
step 1/154, speed: 0.51 steps/s
step 2/154, speed: 3.36 steps/s
step 3/154, speed: 3.48 steps/s
```
After the run, you can view the predictions in the `outputs/predict` folder. Here are some examples of predictions:
```
{"index": 0, "logits": [-0.2014336884021759, 0.6799028515815735], "probs": [0.29290086030960083, 0.7070990800857544], "label": 1}
{"index": 1, "logits": [0.8593899011611938, -0.29743513464927673], "probs": [0.7607553601264954, 0.23924466967582703], "label": 0}
{"index": 2, "logits": [0.7462944388389587, -0.7083730101585388], "probs": [0.8107157349586487, 0.18928426504135132], "label": 0}
```
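The `probs` field is simply the softmax of the `logits` field, and `label` is its argmax; for instance, for the first row above:
```python
import numpy as np

logits = np.array([-0.2014336884021759, 0.6799028515815735])
probs = np.exp(logits) / np.exp(logits).sum()
print(probs)           # [0.2929..., 0.7070...], as in the prediction file
print(probs.argmax())  # 1
```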
### Step 3: Evaluate
Once you have the prediction, you can run the evaluation script to evaluate the model:
```shell
python evaluate.py
```
The evaluation results are as follows: (need to update)
```
accuracy: 0.956666666667, recall: 0.949013157895, f1: 0.95688225039
```
# -*- coding: utf-8 -*-
import os
import requests
import tarfile
import shutil
from tqdm import tqdm
def download(src, url):
    """Download `url` to the local path `src`, showing a byte-level progress bar."""
    file_size = int(requests.head(url).headers['Content-Length'])
    header = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
        '70.0.3538.67 Safari/537.36'
    }
    pbar = tqdm(total=file_size)
    resp = requests.get(url, headers=header, stream=True)
    # write in 'wb' (not 'ab') mode so re-running the script does not append
    # to a partially downloaded archive
    with open(src, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
                pbar.update(1024)
    pbar.close()
    return file_size

# download and unpack the task data archive next to this script
abs_path = os.path.abspath(__file__)
download_url = "https://ernie.bj.bcebos.com/task_data_zh.tgz"
download_path = os.path.join(os.path.dirname(abs_path), "task_data_zh.tgz")
target_dir = os.path.dirname(abs_path)
download(download_path, download_url)

tar = tarfile.open(download_path)
tar.extractall(target_dir)
os.remove(download_path)

# move the chnsenticorp files into a local data/ folder
abs_path = os.path.abspath(__file__)
dst_dir = os.path.join(os.path.dirname(abs_path), "data")
if not os.path.exists(dst_dir) or not os.path.isdir(dst_dir):
    os.makedirs(dst_dir)
for file in os.listdir(os.path.join(target_dir, 'task_data', 'chnsenticorp')):
    shutil.move(os.path.join(target_dir, 'task_data', 'chnsenticorp', file), dst_dir)
shutil.rmtree(os.path.join(target_dir, 'task_data'))
# -*- coding: utf-8 -*-
import json
import numpy as np
def accuracy(preds, labels):
preds = np.array(preds)
labels = np.array(labels)
return (preds == labels).mean()
def f1(preds, labels):
preds = np.array(preds)
labels = np.array(labels)
tp = np.sum((labels == '1') & (preds == '1'))
tn = np.sum((labels == '0') & (preds == '0'))
fp = np.sum((labels == '0') & (preds == '1'))
fn = np.sum((labels == '1') & (preds == '0'))
p = tp * 1.0 / (tp + fp)
r = tp * 1.0 / (tp + fn) * 1.0
f1 = (2 * p * r) / (p + r + 1e-8)
return f1
def recall(preds, labels):
preds = np.array(preds)
labels = np.array(labels)
# recall=TP/(TP+FN)
tp = np.sum((labels == '1') & (preds == '1'))
fn = np.sum((labels == '1') & (preds == '0'))
re = tp * 1.0 / (tp + fn)
return re
def res_evaluate(res_dir="./outputs/predict/predictions.json", eval_phase='test'):
    if eval_phase == 'test':
        data_dir = "./data/test.tsv"
    elif eval_phase == 'dev':
        data_dir = "./data/dev.tsv"
    else:
        assert eval_phase in ['dev', 'test'], 'eval_phase should be dev or test'

    # read the gold labels from the first tsv column, skipping the header row
    labels = []
    with open(data_dir, "r") as file:
        for line in file:
            label = line.strip("\n").split("\t")[0]
            if label == 'label':
                continue
            labels.append(str(label))

    # read the predicted labels, one json object per line
    preds = []
    with open(res_dir, "r") as file:
        for line in file:
            pred = json.loads(line)['label']
            preds.append(str(pred))

    assert len(labels) == len(preds), "prediction result doesn't match to labels"
    print('data num: {}'.format(len(labels)))
    # note: the first metric is accuracy (it was previously mislabeled as precision)
    print("accuracy: {}, recall: {}, f1: {}".format(accuracy(preds, labels), recall(preds, labels), f1(preds, labels)))

res_evaluate()
# coding=utf-8
import paddlepalm as palm
import json
from paddlepalm.distribute import gpu_dev_count
if __name__ == '__main__':
# configs
max_seqlen = 256
batch_size = 8
vocab_path = './pretrain/ernie-zh-base/vocab.txt'
predict_file = './data/test.tsv'
random_seed = 1
config = json.load(open('./pretrain/ernie-zh-base/ernie_config.json'))
input_dim = config['hidden_size']
num_classes = 2
task_name = 'chnsenticorp'
pred_output = './outputs/predict/'
print_steps = 20
pre_params = './pretrain/ernie-zh-base/params'
# ----------------------- for prediction -----------------------
# step 1-1: create readers for prediction
print('prepare to predict...')
predict_cls_reader = palm.reader.ClassifyReader(vocab_path, max_seqlen, seed=random_seed, phase='predict')
# step 1-2: load the data for prediction
predict_cls_reader.load_data(predict_file, batch_size)
# step 2: create a backbone of the model to extract text features
pred_ernie = palm.backbone.ERNIE.from_config(config, phase='predict')
# step 3: register the backbone in reader
predict_cls_reader.register_with(pred_ernie)
# step 4: create the task output head
cls_pred_head = palm.head.Classify(num_classes, input_dim, phase='predict')
# step 5-1: create a task trainer
trainer = palm.Trainer(task_name)
# step 5-2: build forward graph with backbone and task head
trainer.build_predict_forward(pred_ernie, cls_pred_head)
# step 6: load pretrained model
pred_model = trainer.load_ckpt(pre_params)
# step 7: fit prepared reader and data
trainer.fit_reader(predict_cls_reader, phase='predict')
# step 8: predict
print('predicting..')
trainer.predict(print_steps=print_steps, output_dir=pred_output)
## Examples 3: Tagging
This example performs named entity recognition. The following sections detail model preparation, dataset preparation, and how to run the task.
### Step 1: Prepare Pre-trained Models & Datasets
#### Pre-trained Model
The pre-trained model used in this task is [ernie-zh-base](https://github.com/PaddlePaddle/PALM/tree/r0.3-api).
Make sure you have downloaded the required pre-trained model into the current folder.
#### Dataset
This task uses the `MSRA-NER(SIGHAN2006)` dataset.
Download dataset:
```shell
python download.py
```
If everything goes well, there will be a folder named `data/` created with all the data in it.
The data should have 2 fields, `text_a	label`, in tsv format. Here are some example rows:
```
text_a label
在 这 里 恕 弟 不 恭 之 罪 , 敢 在 尊 前 一 诤 : 前 人 论 书 , 每 曰 “ 字 字 有 来 历 , 笔 笔 有 出 处 ” , 细 读 公 字 , 何 尝 跳 出 前 人 藩 篱 , 自 隶 变 而 后 , 直 至 明 季 , 兄 有 何 新 出 ? O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O
相 比 之 下 , 青 岛 海 牛 队 和 广 州 松 日 队 的 雨 中 之 战 虽 然 也 是 0 ∶ 0 , 但 乏 善 可 陈 。 O O O O O B-ORG I-ORG I-ORG I-ORG I-ORG O B-ORG I-ORG I-ORG I-ORG I-ORG O O O O O O O O O O O O O O O O O O O
理 由 多 多 , 最 无 奈 的 却 是 : 5 月 恰 逢 双 重 考 试 , 她 攻 读 的 博 士 学 位 论 文 要 通 考 ; 她 任 教 的 两 所 学 校 , 也 要 在 这 段 时 日 大 考 。 O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O
```
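Note that on disk the tokens and tags inside each field are joined with the `'\x02'` control character (the evaluation script below splits the label column on it), not the plain spaces shown above. A minimal parsing sketch under that assumption:
```python
with open('data/test.tsv') as f:
    next(f)  # skip the 'text_a\tlabel' header
    line = next(f).strip('\n').split('\t')
    tokens = line[0].split('\x02')
    labels = line[1].split('\x02')
    assert len(tokens) == len(labels)
    print(list(zip(tokens, labels))[:5])
```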
### Step 2: Train & Predict
The code used to perform the tagging task is in `run.py`. If you have prepared the pre-trained model and the dataset required for the task, run:
```shell
python run.py
```
If you want to specify a specific gpu or use multiple gpus for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1,2 python run.py
```
Some logs will be shown below:
```
step 1/652 (epoch 0), loss: 216.002, speed: 0.32 steps/s
step 2/652 (epoch 0), loss: 202.567, speed: 1.28 steps/s
step 3/652 (epoch 0), loss: 170.677, speed: 1.05 steps/s
```
After the run, you can view the saved models in the `outputs/` folder and the predictions in the `outputs/predict` folder. Here are some examples of predictions:
```
[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 4, 4, 6, 4, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
```
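The predictions are sequences of label ids; they can be mapped back to tag names by inverting `data/label_map.json` (a sketch, assuming the map goes from tag name to integer id, as `evaluate.py` below relies on):
```python
import json

label_map = json.load(open('data/label_map.json'))  # e.g. {'B-ORG': 4, ..., 'O': 6}
id_to_tag = {v: k for k, v in label_map.items()}

pred = [6, 6, 4, 4, 6]
print([id_to_tag[i] for i in pred])
```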
### Step 3: Evaluate
Once you have the prediction, you can run the evaluation script to evaluate the model:
```shell
python evaluate.py
```
The evaluation results are as follows:
```
precision: 0.948718989809, recall: 0.944806113784, f1: 0.946758508914
```
# -*- coding: utf-8 -*-
import os
import requests
import tarfile
import shutil
from tqdm import tqdm
def download(src, url):
    """Download `url` to the local path `src`, showing a byte-level progress bar."""
    file_size = int(requests.head(url).headers['Content-Length'])
    header = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
        '70.0.3538.67 Safari/537.36'
    }
    pbar = tqdm(total=file_size)
    resp = requests.get(url, headers=header, stream=True)
    # write in 'wb' (not 'ab') mode so re-running the script does not append
    # to a partially downloaded archive
    with open(src, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
                pbar.update(1024)
    pbar.close()
    return file_size

# download and unpack the task data archive next to this script
abs_path = os.path.abspath(__file__)
download_url = "https://ernie.bj.bcebos.com/task_data_zh.tgz"
download_path = os.path.join(os.path.dirname(abs_path), "task_data_zh.tgz")
target_dir = os.path.dirname(abs_path)
download(download_path, download_url)

tar = tarfile.open(download_path)
tar.extractall(target_dir)
os.remove(download_path)

# move the msra_ner files into a local data/ folder
abs_path = os.path.abspath(__file__)
dst_dir = os.path.join(os.path.dirname(abs_path), "data")
if not os.path.exists(dst_dir) or not os.path.isdir(dst_dir):
    os.makedirs(dst_dir)
for file in os.listdir(os.path.join(target_dir, 'task_data', 'msra_ner')):
    shutil.move(os.path.join(target_dir, 'task_data', 'msra_ner', file), dst_dir)
shutil.rmtree(os.path.join(target_dir, 'task_data'))
# -*- coding: utf-8 -*-
import json
def load_label_map(map_dir="./data/label_map.json"):
    """
    :param map_dir: path to the json file mapping each chunk type (tag string) to its integer id
    :return: the loaded label map dict
    """
    return json.load(open(map_dir, "r"))
def cal_chunk(total_res, total_label):
    # token-level chunk statistics; label id 6 is assumed to be the 'O' (non-entity) tag
    assert len(total_label) == len(total_res), 'prediction result doesn\'t match to labels'
num_labels = 0
num_corr = 0
num_infers = 0
for res, label in zip(total_res, total_label):
assert len(res) == len(label), "prediction result doesn\'t match to labels"
num_labels += sum([0 if i == 6 else 1 for i in label])
num_corr += sum([1 if label[i] == res[i] and label[i] != 6 else 0 for i in range(len(label))])
num_infers += sum([0 if i == 6 else 1 for i in res])
precision = num_corr * 1.0 / num_infers if num_infers > 0 else 0.0
recall = num_corr * 1.0 / num_labels if num_labels > 0 else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall > 0 else 0.0
return precision, recall, f1
def res_evaluate(res_dir="./outputs/predict/predictions.json", data_dir="./data/test.tsv"):
label_map = load_label_map()
total_label = []
with open(data_dir, "r") as file:
first_flag = True
for line in file:
if first_flag:
first_flag = False
continue
line = line.strip("\n")
if len(line) == 0:
continue
line = line.split("\t")
if len(line) < 2:
continue
labels = line[1].split("\x02")
total_label.append(labels)
total_label = [[label_map[j] for j in i] for i in total_label]
total_res = []
with open(res_dir, "r") as file:
cnt = 0
for line in file:
line = line.strip("\n")
if len(line) == 0:
continue
try:
    res_arr = json.loads(line)
    # align the predicted sequence with the gold labels: predictions may
    # contain extra positions for the [CLS]/[SEP] tokens added by the reader
    if len(total_label[cnt]) < len(res_arr):
        total_res.append(res_arr[1: 1 + len(total_label[cnt])])
    elif len(total_label[cnt]) == len(res_arr):
        total_res.append(res_arr)
    else:
        total_res.append(res_arr)
        total_label[cnt] = total_label[cnt][: len(res_arr)]
except ValueError:
    print("json format error: {}".format(cnt))
    print(line)
cnt += 1
precision, recall, f1 = cal_chunk(total_res, total_label)
print("precision: {}, recall: {}, f1: {}".format(precision, recall, f1))
res_evaluate()
# coding=utf-8
import paddlepalm as palm
import json
from paddlepalm.distribute import gpu_dev_count
if __name__ == '__main__':
# configs
max_seqlen = 256
batch_size = 16
num_epochs = 6
lr = 5e-5
num_classes = 7
weight_decay = 0.01
dropout_prob = 0.1
vocab_path = './pretrain/ernie-zh-base/vocab.txt'
label_map = './data/label_map.json'
random_seed = 1
train_file = './data/train.tsv'
predict_file = './data/test.tsv'
save_path='./outputs/'
save_type='ckpt'
pre_params = './pretrain/ernie-zh-base/params'
config = json.load(open('./pretrain/ernie-zh-base/ernie_config.json'))
input_dim = config['hidden_size']
task_name = 'msra_ner'
pred_output = './outputs/predict/'
train_print_steps = 10
pred_print_steps = 20
# ----------------------- for training -----------------------
# step 1-1: create readers for training
ner_reader = palm.reader.SequenceLabelReader(vocab_path, max_seqlen, label_map, seed=random_seed)
# step 1-2: load the training data
ner_reader.load_data(train_file, file_format='tsv', num_epochs=num_epochs, batch_size=batch_size)
# step 2: create a backbone of the model to extract text features
ernie = palm.backbone.ERNIE.from_config(config)
# step 3: register the backbone in reader
ner_reader.register_with(ernie)
# step 4: create the task output head
ner_head = palm.head.SequenceLabel(num_classes, input_dim, dropout_prob)
# step 5-1: create a task trainer
trainer = palm.Trainer(task_name)
# step 5-2: build forward graph with backbone and task head
loss_var = trainer.build_forward(ernie, ner_head)
# step 6-1*: use warmup
n_steps = ner_reader.num_examples * num_epochs // batch_size
warmup_steps = int(0.1 * n_steps)
print('total_steps: {}'.format(n_steps))
print('warmup_steps: {}'.format(warmup_steps))
sched = palm.lr_sched.TriangularSchedualer(warmup_steps, n_steps)
# step 6-2: create an optimizer
adam = palm.optimizer.Adam(loss_var, lr, sched)
# step 6-3: build backward
trainer.build_backward(optimizer=adam, weight_decay=weight_decay)
# step 7: fit prepared reader and data
trainer.fit_reader(ner_reader)
# step 8-1*: load pretrained parameters
trainer.load_pretrain(pre_params)
# step 8-2*: set saver to save model
save_steps = (n_steps-20)// gpu_dev_count
print('save_steps: {}'.format(save_steps))
trainer.set_saver(save_path=save_path, save_steps=save_steps, save_type=save_type)
# step 8-3: start training
trainer.train(print_steps=train_print_steps)
# ----------------------- for prediction -----------------------
# step 1-1: create readers for prediction
print('prepare to predict...')
predict_ner_reader = palm.reader.SequenceLabelReader(vocab_path, max_seqlen, label_map, phase='predict')
# step 1-2: load the data for prediction
predict_ner_reader.load_data(predict_file, batch_size)
# step 2: create a backbone of the model to extract text features
pred_ernie = palm.backbone.ERNIE.from_config(config, phase='predict')
# step 3: register the backbone in reader
predict_ner_reader.register_with(pred_ernie)
# step 4: create the task output head
ner_pred_head = palm.head.SequenceLabel(num_classes, input_dim, phase='predict')
# step 5: build forward graph with backbone and task head
trainer.build_predict_forward(pred_ernie, ner_pred_head)
# step 6: load pretrained model
pred_model_path = './outputs/ckpt.step' + str(save_steps)
pred_ckpt = trainer.load_ckpt(pred_model_path)
# step 7: fit prepared reader and data
trainer.fit_reader(predict_ner_reader, phase='predict')
# step 8: predict
print('predicting..')
trainer.predict(print_steps=pred_print_steps, output_dir=pred_output)
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""v1.1"""
class reader(object):
    """interface of data manager."""

    def __init__(self, config):
        assert isinstance(config, dict)

    # @property
    # def inputs_attr(self):
    #     """Describe the attributes of the reader's input objects: each object's name, shape and dtype.
    #     For scalar objects (str, int, float, ...), set shape to an empty list []; for dimensions of
    #     variable length, set the corresponding entry in shape to -1.
    #     Return:
    #         dict. Attribute descriptions of the input objects. For example,
    #         a text classification task may need the input text and its label id:
    #         {"text": ([], 'str'),
    #          "label": ([], 'int')}
    #         a tagging task may need the token sequence and the corresponding tags:
    #         {"tokens", ([-1], 'str'),
    #          "tags", ([-1], 'str')}
    #         a machine reading comprehension task may need the context, question, answer,
    #         and the start/end positions of the answer span:
    #         {"paragraph", ([], 'str'),
    #          "question", ([], 'str'),
    #          "start_position", ([], 'int')
    #     """
    #     raise NotImplementedError()

    @property
    def outputs_attr(self):
        """Describe the attributes of the reader's output objects (the objects being yielded): each
        object's name, shape and dtype. For scalar objects (str, int, float, ...), set shape to an
        empty list []; for dimensions of variable length, set the corresponding entry in shape to -1.
        Note: when the mini-batch gradient descent strategy is used, a batch_size dimension (usually -1)
        should be added to the regular input objects.
        Return:
            dict. Attribute descriptions of the output objects. For example, for text classification
            and matching tasks, the yielded output may contain the following objects (downstream
            backbones and tasks access whichever of them they need):
            {"token_ids": ([-1, max_len], 'int64'),
             "input_ids": ([-1, max_len], 'int64'),
             "segment_ids": ([-1, max_len], 'int64'),
             "input_mask": ([-1, max_len], 'float32'),
             "label": ([-1], 'int')}
        """
        raise NotImplementedError()

    # def parse_line(self):
    #     """Internally the framework describes every sample with a dict whose keys come from
    #     inputs_attr and whose values match the corresponding attribute description. This method
    #     parses one text line into such a dict-typed sample. The default parse_line reads
    #     json-formatted dataset files, one json-described sample per line.
    #     Users can override this method to adapt to other dataset formats, e.g. csv or even tfrecord files.
    #     """
    #     raise NotImplementedError()
    #
    # def tokenize(self, line):
    #     """The framework has built-in tokenizers such as the word piece tokenizer; users can pick
    #     one via the tokenizer hyperparameter. If none of the built-in tokenizers meets the demand,
    #     users can override this method to customize the tokenizer.
    #     Args:
    #         - line: a unicode string.
    #     Return:
    #         a list of tokens
    #     """
    #     raise NotImplementedError()

    def iterator(self):
        """Dataset traversal interface. Note that when the traversal reaches the tail of the dataset,
        this interface should reset the pointer automatically, i.e. start a fresh pass from the head
        of the dataset.
        Yield:
            (dict) elements that meet the requirements in outputs_attr
        """
        raise NotImplementedError()

    @property
    def num_examples(self):
        """Number of samples in the dataset, i.e. the number of samples generated by the iterator in
        each epoch. Note that when strategies such as sliding windows may change the number of samples,
        this interface should return the actual number of samples at runtime."""
        raise NotImplementedError()


class backbone(object):
    """interface of backbone model."""

    def __init__(self, config, phase):
        """
        Args:
            config: dict. The hyperparameters defined in the multi-task config file plus the
                pretrained model config file.
            phase: str. The running phase; currently train and predict are supported.
        """
        assert isinstance(config, dict)

    @property
    def inputs_attr(self):
        """Describe the attributes of the input objects that the backbone needs from the reader: each
        object's name, shape and dtype. For scalar objects (str, int, float, ...), set shape to an
        empty list []; for dimensions of variable length, set the corresponding entry in shape to -1.
        Return:
            dict. Attribute descriptions of the input objects. For example, for text classification
            and matching tasks, the reader that the bert backbone depends on mainly provides:
            {"token_ids": ([-1, max_len], 'int64'),
             "input_ids": ([-1, max_len], 'int64'),
             "segment_ids": ([-1, max_len], 'int64'),
             "input_mask": ([-1, max_len], 'float32')}"""
        raise NotImplementedError()

    @property
    def outputs_attr(self):
        """Describe the attributes of the backbone's output objects: each object's name, shape and
        dtype. For scalar objects (str, int, float, ...), set shape to an empty list []; for
        dimensions of variable length, set the corresponding entry in shape to -1.
        Return:
            dict. Attribute descriptions of the output objects. For example, for text classification
            and matching tasks, the bert backbone's output may contain:
            {"word_emb": ([-1, max_seqlen, word_emb_size], 'float32'),
             "sentence_emb": ([-1, hidden_size], 'float32'),
             "sim_vec": ([-1, hidden_size], 'float32')}"""
        raise NotImplementedError()

    def build(self, inputs):
        """Build the computation graph of the backbone: map static-graph input Variables that meet
        inputs_attr into static-graph output Variables that meet outputs_attr.
        Args:
            inputs: dict. A mapping from the object names in inputs_attr to graph Variables; inputs
                contains at least the objects defined in inputs_attr.
        Return:
            the graph variables to output. The output objects are added to the fetch_list, so the
            runtime results are available at every train/predict step and are passed to the
            postprocess method for user-side handling.
        """
        raise NotImplementedError()


class task_paradigm(object):

    def __init__(self, config, phase, backbone_config):
        """
        config: dict. The hyperparameters defined in the task instance config plus the multi-task
            config file.
        phase: str. The running phase; currently train and predict are supported.
        """

    @property
    def inputs_attrs(self):
        """Describe the attributes of the input objects that the task_layer reads from input object
        sets such as reader and backbone. The first-level key is the name of an object set, e.g.
        backbone or reader (more flexible inputs will be supported later); the second-level key
        describes each object in that set: its name, shape and dtype. For scalar objects (str, int,
        float, ...), set shape to an empty list []; for dimensions of variable length, set the
        corresponding entry in shape to -1.
        Return:
            dict. Attribute descriptions of each object set and its input objects."""
        raise NotImplementedError()

    @property
    def outputs_attr(self):
        """Describe the attributes of the task's output objects: each object's name, shape and dtype.
        The output objects are added to the fetch_list, so the runtime results are available at every
        train/predict step and are passed to the postprocess method for user-side handling.
        For scalar objects (str, int, float, ...), set shape to an empty list []; for dimensions of
        variable length, set the corresponding entry in shape to -1.
        Return:
            dict. Attribute descriptions of the output objects. Note that in the training phase an
            output object named loss is required.
        """
        raise NotImplementedError()

    @property
    def epoch_inputs_attrs(self):
        return {}

    def build(self, inputs, scope_name=""):
        """Build the computation graph of the task_layer: map the static-graph Variables coming from
        the object sets that meet inputs_attrs into static-graph output Variables that meet
        outputs_attr.
        Args:
            inputs: dict. A mapping from the object names in inputs_attrs to graph Variables; inputs
                contains at least the objects defined in inputs_attr.
        Return:
            the graph variables to output. The output objects are added to the fetch_list, so the
            runtime results are available at every train/predict step and are passed to the
            postprocess method for user-side handling.
        """
        raise NotImplementedError()

    def postprocess(self, rt_outputs):
        """Postprocess the runtime results of the task_layer for the current batch after every train
        or predict step. Note that besides the outputs of the build method, rt_outputs automatically
        contains the computed loss as well."""
        pass

    def epoch_postprocess(self, post_inputs):
        pass
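As an illustration of the interface above, a toy reader that yields pre-tokenized classification batches might look as follows. This is a minimal sketch; the config keys (`samples`, `max_seqlen`) and class name are chosen for the example, not mandated by the framework:
```python
class ToyClassifyReader(reader):

    def __init__(self, config):
        super(ToyClassifyReader, self).__init__(config)
        self._data = config['samples']      # list of (token_ids, label) pairs
        self._max_len = config['max_seqlen']

    @property
    def outputs_attr(self):
        return {"token_ids": ([-1, self._max_len], 'int64'),
                "label": ([-1], 'int64')}

    def iterator(self):
        while True:                          # restart from the head when exhausted
            for token_ids, label in self._data:
                padded = token_ids[:self._max_len] + [0] * max(0, self._max_len - len(token_ids))
                yield {"token_ids": [padded], "label": [label]}

    @property
    def num_examples(self):
        return len(self._data)
```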
import head
from trainer import Trainer
from multihead_trainer import MultiHeadTrainer

del interface
del task_instance
__all__ = ["download", "ls"]

ssl._create_default_https_context = ssl._create_unverified_context

_items = {
    'pretrain': {'ernie-en-large': 'https://ernie.bj.bcebos.com/ERNIE_Large_en_stable-2.0.0.tar.gz',
                 'ernie-en-base': 'https://ernie.bj.bcebos.com/ERNIE_Base_en_stable-2.0.0.tar.gz',
                 'ernie-zh-base': 'https://ernie.bj.bcebos.com/ERNIE_1.0_max-len-512.tar.gz',
                 'bert-en-uncased-large': 'https://bert-models.bj.bcebos.com/uncased_L-24_H-1024_A-16.tar.gz',
                 'bert-en-uncased-base': 'https://bert-models.bj.bcebos.com/uncased_L-12_H-768_A-12.tar.gz',
                 'roberta-zh-base': 'https://bert-models.bj.bcebos.com/chinese_roberta_wwm_ext_L-12_H-768_A-12.tar.gz',
                 'roberta-zh-large': 'https://bert-models.bj.bcebos.com/chinese_roberta_wwm_large_ext_L-24_H-1024_A-16.tar.gz',
                 'utils': None},
    'vocab': {'utils': None},
    'backbone': {'utils': None},
    'head': {'utils': None},
    'reader': {'utils': None},
}

def _download(item, scope, path, silent=False, convert=False):
    data_url = _items[item][scope]
    if data_url == None:
        return
    # ... (download and extraction logic elided in the diff)
        os.removedirs(source_path)
    if not silent:
        print ('done!')
    if convert:
        if not silent:
            print ('Converting params...', end=" ")
        _convert(data_dir, silent)
    if not silent:
        print ('done!')

# ... (_convert body elided in the diff)
        os.removedirs(path + '/params1/')

def download(item, scope='all', path='.'):
    """download an item. The available scopes and contained items can be showed with `paddlepalm.downloader.ls`.

    Args:
        scope: the scope the item belongs to.
        item: the item to download.
        path: the target dir to download to. Default is `.`, means current dir.
    """
    item = item.lower()
    scope = scope.lower()
    assert item in _items, '{} is not found. Support list: {}'.format(item, list(_items.keys()))

# ... (ls implementation elided in the diff)
        print ('Available {} items: '.format(i))
        _ls(i, scope, l)
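With these items registered, fetching a pretrained model reduces to one call; a usage sketch, assuming the target directory layout used in the examples above:
```python
import paddlepalm

# list what can be downloaded, then fetch the Chinese ERNIE base model
paddlepalm.downloader.ls('pretrain')
paddlepalm.downloader.download('pretrain', 'ernie-zh-base', './pretrain')
```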
"""v1.1"""

class Backbone(object):
    """interface of backbone model."""

    def __init__(self, config, phase):
        # ... (the remaining interface methods are unchanged and elided here;
        # the task_paradigm class, which duplicated the interface shown above,
        # no longer lives in this module)
        raise NotImplementedError()
from paddle import fluid
from paddle.fluid import layers

from paddlepalm.backbone.utils.transformer import pre_process_layer, encoder
from paddlepalm.backbone.base_backbone import Backbone


class BERT(Backbone):

    def __init__(self, hidden_size, num_hidden_layers, num_attention_heads, vocab_size, \
          max_position_embeddings, type_vocab_size, hidden_act, hidden_dropout_prob, \
          attention_probs_dropout_prob, initializer_range, is_pairwise=False, phase='train'):

        self._emb_size = hidden_size
        self._n_layer = num_hidden_layers
        self._n_head = num_attention_heads
        self._voc_size = vocab_size
        self._max_position_seq_len = max_position_embeddings
        self._sent_types = type_vocab_size

        self._hidden_act = hidden_act
        self._prepostprocess_dropout = hidden_dropout_prob
        self._attention_dropout = attention_probs_dropout_prob

        self._word_emb_name = "word_embedding"
        self._pos_emb_name = "pos_embedding"
        self._sent_emb_name = "sent_embedding"
        self._task_emb_name = "task_embedding"
        self._emb_dtype = "float32"
        self._phase = phase
        self._is_pairwise = is_pairwise
        self._param_initializer = fluid.initializer.TruncatedNormal(
            scale=initializer_range)

    @classmethod
    def from_config(self, config, phase='train'):
        # ... (asserts on the other required config keys elided in the diff)
        assert 'initializer_range' in config, "{} is required to initialize ERNIE".format('initializer_range')

        hidden_size = config['hidden_size']
        num_hidden_layers = config['num_hidden_layers']
        num_attention_heads = config['num_attention_heads']
        vocab_size = config['vocab_size']
        max_position_embeddings = config['max_position_embeddings']
        if 'sent_type_vocab_size' in config:
            sent_type_vocab_size = config['sent_type_vocab_size']
        else:
            sent_type_vocab_size = config['type_vocab_size']

        hidden_act = config['hidden_act']
        hidden_dropout_prob = config['hidden_dropout_prob']
        attention_probs_dropout_prob = config['attention_probs_dropout_prob']
        initializer_range = config['initializer_range']
        if 'is_pairwise' in config:
            is_pairwise = config['is_pairwise']
        else:
            is_pairwise = False

        return self(hidden_size, num_hidden_layers, num_attention_heads, vocab_size, \
           max_position_embeddings, sent_type_vocab_size, \
           hidden_act, hidden_dropout_prob, attention_probs_dropout_prob, initializer_range, is_pairwise, phase)

    @property
    def inputs_attr(self):
        ret = {"token_ids": [[-1, -1], 'int64'],
               "position_ids": [[-1, -1], 'int64'],
               "segment_ids": [[-1, -1], 'int64'],
               "input_mask": [[-1, -1, 1], 'float32'],
               }
        if self._is_pairwise and self._phase == 'train':
            ret.update({"token_ids_neg": [[-1, -1], 'int64'],
                        "position_ids_neg": [[-1, -1], 'int64'],
                        "segment_ids_neg": [[-1, -1], 'int64'],
                        "input_mask_neg": [[-1, -1, 1], 'float32'],
                        })
        return ret

    @property
    def outputs_attr(self):
        ret = {"word_embedding": [[-1, -1, self._emb_size], 'float32'],
               "embedding_table": [[-1, self._voc_size, self._emb_size], 'float32'],
               "encoder_outputs": [[-1, -1, self._emb_size], 'float32'],
               "sentence_embedding": [[-1, self._emb_size], 'float32'],
               "sentence_pair_embedding": [[-1, self._emb_size], 'float32']}
        if self._is_pairwise and self._phase == 'train':
            ret.update({"word_embedding_neg": [[-1, -1, self._emb_size], 'float32'],
                        "encoder_outputs_neg": [[-1, -1, self._emb_size], 'float32'],
                        "sentence_embedding_neg": [[-1, self._emb_size], 'float32'],
                        "sentence_pair_embedding_neg": [[-1, self._emb_size], 'float32']})
        return ret

    def build(self, inputs, scope_name=""):
        src_ids = inputs['token_ids']
        pos_ids = inputs['position_ids']
        sent_ids = inputs['segment_ids']
        input_mask = inputs['input_mask']
        self._emb_dtype = 'float32'

        input_buffer = {}
        output_buffer = {}
        input_buffer['base'] = [src_ids, pos_ids, sent_ids, input_mask]
        output_buffer['base'] = {}

        if self._is_pairwise and self._phase == 'train':
            src_ids = inputs['token_ids_neg']
            pos_ids = inputs['position_ids_neg']
            sent_ids = inputs['segment_ids_neg']
            input_mask = inputs['input_mask_neg']
            input_buffer['neg'] = [src_ids, pos_ids, sent_ids, input_mask]
            output_buffer['neg'] = {}

        for key, (src_ids, pos_ids, sent_ids, input_mask) in input_buffer.items():
            # padding id in vocabulary must be set to 0
            emb_out = fluid.embedding(
                input=src_ids,
                size=[self._voc_size, self._emb_size],
                dtype=self._emb_dtype,
                param_attr=fluid.ParamAttr(
                    name=scope_name+self._word_emb_name, initializer=self._param_initializer),
                is_sparse=False)

            # fluid.global_scope().find_var('backbone-word_embedding').get_tensor()
            embedding_table = fluid.default_main_program().global_block().var(scope_name+self._word_emb_name)

            position_emb_out = fluid.embedding(
                input=pos_ids,
                size=[self._max_position_seq_len, self._emb_size],
                dtype=self._emb_dtype,
                param_attr=fluid.ParamAttr(
                    name=scope_name+self._pos_emb_name, initializer=self._param_initializer))

            sent_emb_out = fluid.embedding(
                sent_ids,
                size=[self._sent_types, self._emb_size],
                dtype=self._emb_dtype,
                param_attr=fluid.ParamAttr(
                    name=scope_name+self._sent_emb_name, initializer=self._param_initializer))

            emb_out = emb_out + position_emb_out
            emb_out = emb_out + sent_emb_out

            emb_out = pre_process_layer(
                emb_out, 'nd', self._prepostprocess_dropout, name=scope_name+'pre_encoder')

            self_attn_mask = fluid.layers.matmul(
                x=input_mask, y=input_mask, transpose_y=True)

            self_attn_mask = fluid.layers.scale(
                x=self_attn_mask, scale=10000.0, bias=-1.0, bias_after_scale=False)
            n_head_self_attn_mask = fluid.layers.stack(
                x=[self_attn_mask] * self._n_head, axis=1)
            n_head_self_attn_mask.stop_gradient = True

            enc_out = encoder(
                enc_input=emb_out,
                attn_bias=n_head_self_attn_mask,
                n_layer=self._n_layer,
                n_head=self._n_head,
                d_key=self._emb_size // self._n_head,
                d_value=self._emb_size // self._n_head,
                d_model=self._emb_size,
                d_inner_hid=self._emb_size * 4,
                prepostprocess_dropout=self._prepostprocess_dropout,
                attention_dropout=self._attention_dropout,
                relu_dropout=0,
                hidden_act=self._hidden_act,
                preprocess_cmd="",
                postprocess_cmd="dan",
                param_initializer=self._param_initializer,
                name=scope_name+'encoder')

            next_sent_feat = fluid.layers.slice(
                input=enc_out, axes=[1], starts=[0], ends=[1])
            next_sent_feat = fluid.layers.reshape(next_sent_feat, [-1, next_sent_feat.shape[-1]])
            next_sent_feat = fluid.layers.fc(
                input=next_sent_feat,
                size=self._emb_size,
                act="tanh",
                param_attr=fluid.ParamAttr(
                    name=scope_name+"pooled_fc.w_0", initializer=self._param_initializer),
                bias_attr=scope_name+"pooled_fc.b_0")

            output_buffer[key]['word_embedding'] = emb_out
            output_buffer[key]['encoder_outputs'] = enc_out
            output_buffer[key]['sentence_embedding'] = next_sent_feat
            output_buffer[key]['sentence_pair_embedding'] = next_sent_feat

        ret = {}
        ret['embedding_table'] = embedding_table
        ret['word_embedding'] = output_buffer['base']['word_embedding']
        ret['encoder_outputs'] = output_buffer['base']['encoder_outputs']
        ret['sentence_embedding'] = output_buffer['base']['sentence_embedding']
        ret['sentence_pair_embedding'] = output_buffer['base']['sentence_pair_embedding']

        if self._is_pairwise and self._phase == 'train':
            ret['word_embedding_neg'] = output_buffer['neg']['word_embedding']
            ret['encoder_outputs_neg'] = output_buffer['neg']['encoder_outputs']
            ret['sentence_embedding_neg'] = output_buffer['neg']['sentence_embedding']
            ret['sentence_pair_embedding_neg'] = output_buffer['neg']['sentence_pair_embedding']

        return ret

    def postprocess(self, rt_outputs):
        pass
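A construction sketch for this backbone, assuming a `bert_config.json` with the required keys, laid out like the ERNIE config used in the examples above (the path is an assumption):
```python
import json

config = json.load(open('./pretrain/bert-en-uncased-base/bert_config.json'))
bert = BERT.from_config(config, phase='train')
print(bert.inputs_attr)  # token_ids, position_ids, segment_ids, input_mask
```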
...@@ -24,17 +24,17 @@ from paddle import fluid ...@@ -24,17 +24,17 @@ from paddle import fluid
from paddle.fluid import layers from paddle.fluid import layers
from paddlepalm.backbone.utils.transformer import pre_process_layer, encoder from paddlepalm.backbone.utils.transformer import pre_process_layer, encoder
from paddlepalm.backbone.base_backbone import BaseBackbone from paddlepalm.backbone.base_backbone import Backbone
class ERNIE(BaseBackbone): class ERNIE(Backbone):
def __init__(self, hidden_size, num_hidden_layers, num_attention_heads, vocab_size, \ def __init__(self, hidden_size, num_hidden_layers, num_attention_heads, vocab_size, \
max_position_embeddings, sent_type_vocab_size, task_type_vocab_size, \ max_position_embeddings, sent_type_vocab_size, task_type_vocab_size, \
hidden_act, hidden_dropout_prob, attention_probs_dropout_prob, initializer_range, phase='train'): hidden_act, hidden_dropout_prob, attention_probs_dropout_prob, initializer_range, is_pairwise=False, phase='train'):
# self._is_training = phase == 'train' # backbone一般不用关心运行阶段,因为outputs在任何阶段基本不会变 # self._is_training = phase == 'train' # backbone一般不用关心运行阶段,因为outputs在任何阶段基本不会变
self._emb_size = hidden_size self._emb_size = hidden_size
self._n_layer = num_hidden_layers self._n_layer = num_hidden_layers
self._n_head = num_attention_heads self._n_head = num_attention_heads
...@@ -53,7 +53,8 @@ class ERNIE(BaseBackbone): ...@@ -53,7 +53,8 @@ class ERNIE(BaseBackbone):
self._sent_emb_name = "sent_embedding" self._sent_emb_name = "sent_embedding"
self._task_emb_name = "task_embedding" self._task_emb_name = "task_embedding"
self._emb_dtype = "float32" self._emb_dtype = "float32"
self._is_pairwise = is_pairwise
self._phase=phase
self._param_initializer = fluid.initializer.TruncatedNormal( self._param_initializer = fluid.initializer.TruncatedNormal(
scale=initializer_range) scale=initializer_range)
...@@ -65,7 +66,7 @@ class ERNIE(BaseBackbone): ...@@ -65,7 +66,7 @@ class ERNIE(BaseBackbone):
assert 'vocab_size' in config, "{} is required to initialize ERNIE".format('vocab_size') assert 'vocab_size' in config, "{} is required to initialize ERNIE".format('vocab_size')
assert 'max_position_embeddings' in config, "{} is required to initialize ERNIE".format('max_position_embeddings') assert 'max_position_embeddings' in config, "{} is required to initialize ERNIE".format('max_position_embeddings')
assert 'sent_type_vocab_size' in config or 'type_vocab_size' in config, "{} is required to initialize ERNIE".format('sent_type_vocab_size') assert 'sent_type_vocab_size' in config or 'type_vocab_size' in config, "{} is required to initialize ERNIE".format('sent_type_vocab_size')
assert 'task_type_vocab_size' in config, "{} is required to initialize ERNIE".format('task_type_vocab_size') # assert 'task_type_vocab_size' in config, "{} is required to initialize ERNIE".format('task_type_vocab_size')
assert 'hidden_act' in config, "{} is required to initialize ERNIE".format('hidden_act') assert 'hidden_act' in config, "{} is required to initialize ERNIE".format('hidden_act')
assert 'hidden_dropout_prob' in config, "{} is required to initialize ERNIE".format('hidden_dropout_prob') assert 'hidden_dropout_prob' in config, "{} is required to initialize ERNIE".format('hidden_dropout_prob')
assert 'attention_probs_dropout_prob' in config, "{} is required to initialize ERNIE".format('attention_probs_dropout_prob') assert 'attention_probs_dropout_prob' in config, "{} is required to initialize ERNIE".format('attention_probs_dropout_prob')
...@@ -80,126 +81,175 @@ class ERNIE(BaseBackbone): ...@@ -80,126 +81,175 @@ class ERNIE(BaseBackbone):
sent_type_vocab_size = config['sent_type_vocab_size'] sent_type_vocab_size = config['sent_type_vocab_size']
else: else:
sent_type_vocab_size = config['type_vocab_size'] sent_type_vocab_size = config['type_vocab_size']
task_type_vocab_size = config['task_type_vocab_size'] if 'task_type_vocab_size' in config:
task_type_vocab_size = config['task_type_vocab_size']
else:
task_type_vocab_size = config['type_vocab_size']
hidden_act = config['hidden_act'] hidden_act = config['hidden_act']
hidden_dropout_prob = config['hidden_dropout_prob'] hidden_dropout_prob = config['hidden_dropout_prob']
attention_probs_dropout_prob = config['attention_probs_dropout_prob'] attention_probs_dropout_prob = config['attention_probs_dropout_prob']
initializer_range = config['initializer_range'] initializer_range = config['initializer_range']
if 'is_pairwise' in config:
is_pairwise = config['is_pairwise']
else:
is_pairwise = False
return cls(hidden_size, num_hidden_layers, num_attention_heads, vocab_size, \ return cls(hidden_size, num_hidden_layers, num_attention_heads, vocab_size, \
max_position_embeddings, sent_type_vocab_size, task_type_vocab_size, \ max_position_embeddings, sent_type_vocab_size, task_type_vocab_size, \
hidden_act, hidden_dropout_prob, attention_probs_dropout_prob, initializer_range, phase=phase) hidden_act, hidden_dropout_prob, attention_probs_dropout_prob, initializer_range, is_pairwise, phase=phase)
@property @property
def inputs_attr(self): def inputs_attr(self):
return {"token_ids": [[-1, -1], 'int64'], ret = {"token_ids": [[-1, -1], 'int64'],
"position_ids": [[-1, -1], 'int64'], "position_ids": [[-1, -1], 'int64'],
"segment_ids": [[-1, -1], 'int64'], "segment_ids": [[-1, -1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32'], "input_mask": [[-1, -1, 1], 'float32'],
"task_ids": [[-1,-1], 'int64']} "task_ids": [[-1,-1], 'int64']}
if self._is_pairwise and self._phase=='train':
ret.update({"token_ids_neg": [[-1, -1], 'int64'],
"position_ids_neg": [[-1, -1], 'int64'],
"segment_ids_neg": [[-1, -1], 'int64'],
"input_mask_neg": [[-1, -1, 1], 'float32'],
"task_ids_neg": [[-1,-1], 'int64']
})
return ret
@property @property
def outputs_attr(self): def outputs_attr(self):
return {"word_embedding": [[-1, -1, self._emb_size], 'float32'], ret = {"word_embedding": [[-1, -1, self._emb_size], 'float32'],
"embedding_table": [[-1, self._voc_size, self._emb_size], 'float32'], "embedding_table": [[-1, self._voc_size, self._emb_size], 'float32'],
"encoder_outputs": [[-1, -1, self._emb_size], 'float32'], "encoder_outputs": [[-1, -1, self._emb_size], 'float32'],
"sentence_embedding": [[-1, self._emb_size], 'float32'], "sentence_embedding": [[-1, self._emb_size], 'float32'],
"sentence_pair_embedding": [[-1, self._emb_size], 'float32']} "sentence_pair_embedding": [[-1, self._emb_size], 'float32']}
if self._is_pairwise and self._phase == 'train':
ret.update({"word_embedding_neg": [[-1, -1, self._emb_size], 'float32'],
"encoder_outputs_neg": [[-1, -1, self._emb_size], 'float32'],
"sentence_embedding_neg": [[-1, self._emb_size], 'float32'],
"sentence_pair_embedding_neg": [[-1, self._emb_size], 'float32']})
return ret
def build(self, inputs, scope_name=""): def build(self, inputs, scope_name=""):
src_ids = inputs['token_ids'] src_ids = inputs['token_ids']
pos_ids = inputs['position_ids'] pos_ids = inputs['position_ids']
sent_ids = inputs['segment_ids'] sent_ids = inputs['segment_ids']
input_mask = inputs['input_mask'] input_mask = inputs['input_mask']
task_ids = inputs['task_ids'] task_ids = inputs['task_ids']
# padding id in vocabulary must be set to 0 input_buffer = {}
emb_out = fluid.embedding( output_buffer = {}
input=src_ids, input_buffer['base'] = [src_ids, pos_ids, sent_ids, input_mask, task_ids]
size=[self._voc_size, self._emb_size], output_buffer['base'] = {}
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr( if self._is_pairwise and self._phase =='train':
name=scope_name+self._word_emb_name, initializer=self._param_initializer), src_ids = inputs['token_ids_neg']
is_sparse=False) pos_ids = inputs['position_ids_neg']
            sent_ids = inputs['segment_ids_neg']
            input_mask = inputs['input_mask_neg']
            task_ids = inputs['task_ids_neg']
            input_buffer['neg'] = [src_ids, pos_ids, sent_ids, input_mask, task_ids]
            output_buffer['neg'] = {}

        for key, (src_ids, pos_ids, sent_ids, input_mask, task_ids) in input_buffer.items():
            # padding id in vocabulary must be set to 0
            emb_out = fluid.embedding(
                input=src_ids,
                size=[self._voc_size, self._emb_size],
                dtype=self._emb_dtype,
                param_attr=fluid.ParamAttr(
                    name=scope_name+self._word_emb_name, initializer=self._param_initializer),
                is_sparse=False)

            # fluid.global_scope().find_var('backbone-word_embedding').get_tensor()
            embedding_table = fluid.default_main_program().global_block().var(scope_name+self._word_emb_name)

            position_emb_out = fluid.embedding(
                input=pos_ids,
                size=[self._max_position_seq_len, self._emb_size],
                dtype=self._emb_dtype,
                param_attr=fluid.ParamAttr(
                    name=scope_name+self._pos_emb_name, initializer=self._param_initializer))

            sent_emb_out = fluid.embedding(
                sent_ids,
                size=[self._sent_types, self._emb_size],
                dtype=self._emb_dtype,
                param_attr=fluid.ParamAttr(
                    name=scope_name+self._sent_emb_name, initializer=self._param_initializer))

            emb_out = emb_out + position_emb_out
            emb_out = emb_out + sent_emb_out

            task_emb_out = fluid.embedding(
                task_ids,
                size=[self._task_types, self._emb_size],
                dtype=self._emb_dtype,
                param_attr=fluid.ParamAttr(
                    name=scope_name+self._task_emb_name,
                    initializer=self._param_initializer))

            emb_out = emb_out + task_emb_out

            emb_out = pre_process_layer(
                emb_out, 'nd', self._prepostprocess_dropout, name=scope_name+'pre_encoder')

            self_attn_mask = fluid.layers.matmul(
                x=input_mask, y=input_mask, transpose_y=True)

            self_attn_mask = fluid.layers.scale(
                x=self_attn_mask, scale=10000.0, bias=-1.0, bias_after_scale=False)
            n_head_self_attn_mask = fluid.layers.stack(
                x=[self_attn_mask] * self._n_head, axis=1)
            n_head_self_attn_mask.stop_gradient = True

            enc_out = encoder(
                enc_input=emb_out,
                attn_bias=n_head_self_attn_mask,
                n_layer=self._n_layer,
                n_head=self._n_head,
                d_key=self._emb_size // self._n_head,
                d_value=self._emb_size // self._n_head,
                d_model=self._emb_size,
                d_inner_hid=self._emb_size * 4,
                prepostprocess_dropout=self._prepostprocess_dropout,
                attention_dropout=self._attention_dropout,
                relu_dropout=0,
                hidden_act=self._hidden_act,
                preprocess_cmd="",
                postprocess_cmd="dan",
                param_initializer=self._param_initializer,
                name=scope_name+'encoder')

            next_sent_feat = fluid.layers.slice(
                input=enc_out, axes=[1], starts=[0], ends=[1])
            next_sent_feat = fluid.layers.reshape(next_sent_feat, [-1, next_sent_feat.shape[-1]])
            next_sent_feat = fluid.layers.fc(
                input=next_sent_feat,
                size=self._emb_size,
                act="tanh",
                param_attr=fluid.ParamAttr(
                    name=scope_name+"pooled_fc.w_0", initializer=self._param_initializer),
                bias_attr=scope_name+"pooled_fc.b_0")

            output_buffer[key]['word_embedding'] = emb_out
            output_buffer[key]['encoder_outputs'] = enc_out
            output_buffer[key]['sentence_embedding'] = next_sent_feat
            output_buffer[key]['sentence_pair_embedding'] = next_sent_feat

        ret = {}
        ret['embedding_table'] = embedding_table
        ret['word_embedding'] = output_buffer['base']['word_embedding']
        ret['encoder_outputs'] = output_buffer['base']['encoder_outputs']
        ret['sentence_embedding'] = output_buffer['base']['sentence_embedding']
        ret['sentence_pair_embedding'] = output_buffer['base']['sentence_pair_embedding']

        if self._is_pairwise and self._phase == 'train':
            ret['word_embedding_neg'] = output_buffer['neg']['word_embedding']
            ret['encoder_outputs_neg'] = output_buffer['neg']['encoder_outputs']
            ret['sentence_embedding_neg'] = output_buffer['neg']['sentence_embedding']
            ret['sentence_pair_embedding_neg'] = output_buffer['neg']['sentence_pair_embedding']

        return ret

    def postprocess(self, rt_outputs):
        pass
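    # Note (illustrative summary, not part of the original file): in the pairwise
    # training path the loop above runs the same encoder twice, once over the
    # 'base' (positive) inputs and once over the '*_neg' inputs, so a pairwise
    # task head can consume 'sentence_pair_embedding' together with
    # 'sentence_pair_embedding_neg' when computing a hinge-style loss.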
......
from conf_controller import ConfigController
from controller import Controller
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import os
import sys
import importlib
import multiprocessing
from paddle import fluid
from paddle.fluid import layers
import yaml
import json
import logging
import time
import numpy as np
from paddlepalm.utils.saver import init_pretraining_params, init_checkpoint
from paddlepalm.utils.config_helper import PDConfig
from paddlepalm.utils.print_helper import print_dict
from paddlepalm.utils.reader_helper import create_net_inputs, create_iterator_fn, create_joint_iterator_fn, merge_input_attrs
from paddlepalm.default_settings import *
from paddlepalm.task_instance import TaskInstance, check_instances
try:
    import queue as Queue  # Python 3
except ImportError:
    import Queue  # Python 2
from threading import Thread
DEBUG=False
VERBOSE=0
def _get_basename(f):
return os.path.splitext(f)[0]
def _get_suffix(f):
return os.path.splitext(f)[-1]
def _parse_yaml(f, asdict=True, support_cmd_line=False):
assert os.path.exists(f), "file {} not found.".format(f)
if support_cmd_line:
args = PDConfig(yaml_file=f, fuse_args=True)
args.build()
return args.asdict() if asdict else args
else:
if asdict:
with open(f, "r") as fin:
yaml_config = yaml.load(fin, Loader=yaml.SafeLoader)
return yaml_config
else:
raise NotImplementedError()
def _parse_json(f, asdict=True, support_cmd_line=False):
assert os.path.exists(f), "file {} not found.".format(f)
if support_cmd_line:
args = PDConfig(json_file=f, fuse_args=support_cmd_line)
args.build()
return args.asdict() if asdict else args
else:
if asdict:
with open(f, "r") as fin:
config = json.load(fin)
return config
else:
raise NotImplementedError()
def _parse_list(string, astype=str):
assert isinstance(string, str), "{} is not a string.".format(string)
if ',' not in string:
return [astype(string)]
string = string.replace(',', ' ')
return [astype(i) for i in string.split()]
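# Illustrative behavior (hedged example):
#   _parse_list('cls4mrqa, mrqa')          -> ['cls4mrqa', 'mrqa']
#   _parse_list('1.0, 0.5', astype=float)  -> [1.0, 0.5]
#   _parse_list('single_task')             -> ['single_task']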
def _try_float(s):
    try:
        return float(s)
    except (TypeError, ValueError):
        return s
def _check_conf(conf, checklist=None):
assert isinstance(conf, dict), "{} is not a dict.".format(conf)
ret = {}
for k,v in conf.items():
if isinstance(v, str):
v = _try_float(v)
ret[k] = v
if checklist is not None:
for k, t in checklist:
            assert k in ret, "required argument {} does NOT exist in config file.".format(k)
            assert isinstance(ret[k], t), "value type of argument {} should be {}".format(k, t)
return ret
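# Illustrative coercion: _check_conf({'learning_rate': '3e-5', 'backbone': 'ernie'})
# returns {'learning_rate': 3e-05, 'backbone': 'ernie'}; numeric strings are cast
# to float, everything else passes through unchanged.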
# TODO: support a None mechanism that allows hidden size, batch size and seqlen to be set to None
def _check_io(in_attr, out_attr, strict=False, in_name="left", out_name="right"):
for name, attr in in_attr.items():
assert name in out_attr, in_name+': '+name+' not found in '+out_name
if attr != out_attr[name]:
if strict:
raise ValueError(name+': shape or dtype not consistent!')
else:
logging.warning('{}: shape or dtype not consistent!\n{}:\n{}\n{}:\n{}'.format(name, in_name, attr, out_name, out_attr[name]))
def _merge_conf(conf1, conf2, conf1_first=True, strict=False):
assert isinstance(conf1, dict), "{} is not a dict.".format(conf1)
assert isinstance(conf2, dict), "{} is not a dict.".format(conf2)
base_conf = conf2 if conf1_first else conf1
base_conf = base_conf.copy()
new_conf = conf1 if conf1_first else conf2
    for k, v in new_conf.items():
        if k in base_conf:
            if base_conf[k] != v:
                logging.warning("value of argument {} has been updated to {}.".format(k, v))
                base_conf[k] = v
        else:
            if strict:
                continue
            base_conf[k] = v
return base_conf
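# Illustrative merge (hedged example, no conflicting keys):
#   _merge_conf({'lr': 3e-5}, {'batch_size': 32}) -> {'batch_size': 32, 'lr': 3e-05}
# With conf1_first=True the keys of conf1 are folded into a copy of conf2; a
# conflicting value triggers a warning and an update.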
def _encode_inputs(inputs, scope_name, sep='/', cand_set=None):
outputs = {}
for k, v in inputs.items():
if cand_set is not None:
if k in cand_set:
outputs[k] = v
if scope_name+sep+k in cand_set:
outputs[scope_name+sep+k] = v
else:
outputs[scope_name+sep+k] = v
return outputs
def _decode_inputs(inputs, scope_name, sep='/', keep_unk_keys=True):
outputs = {}
for name, value in inputs.items():
# var for backbone are also available to tasks
if keep_unk_keys and sep not in name:
outputs[name] = value
# var for this inst
if name.startswith(scope_name+'/'):
outputs[name[len(scope_name+'/'):]] = value
return outputs
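# Illustrative round trip with scope_name='cls':
#   _encode_inputs({'label_ids': v}, 'cls')                      -> {'cls/label_ids': v}
#   _decode_inputs({'cls/label_ids': v, 'input_ids': u}, 'cls')  -> {'label_ids': v, 'input_ids': u}
# (unscoped backbone vars are kept so task layers can read them too)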
def _init_env(use_gpu):
if use_gpu:
place = fluid.CUDAPlace(0)
dev_count = fluid.core.get_cuda_device_count()
else:
place = fluid.CPUPlace()
dev_count = int(os.environ.get('CPU_NUM', multiprocessing.cpu_count()))
return fluid.Executor(place), dev_count
def _fit_attr(conf, fit_attr, strict=False):
for i, attr in fit_attr.items():
if i not in conf:
if strict:
raise Exception('Argument {} is required to create a controller.'.format(i))
else:
continue
conf[i] = attr(conf[i])
return conf
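# Illustrative coercion: _fit_attr({'learning_rate': '1e-5'}, {'learning_rate': float})
# returns {'learning_rate': 1e-05}; with strict=True, a missing required key raises.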
class ConfigController(object):
def __init__(self, config, task_dir='.', for_train=True):
"""
Args:
config: (str|dict) 字符串类型时,给出yaml格式的config配置文件路径;
"""
self._for_train = for_train
assert isinstance(config, str) or isinstance(config, dict), "a config dict or config file path is required to create a Controller."
if isinstance(config, str):
mtl_conf = _parse_yaml(config, support_cmd_line=True)
else:
mtl_conf = config
mtl_conf = _check_conf(mtl_conf)
mtl_conf = _fit_attr(mtl_conf, REQUIRED_ARGS, strict=True)
mtl_conf = _fit_attr(mtl_conf, OPTIONAL_ARGS, strict=False)
exe, dev_count = _init_env(use_gpu=mtl_conf.get('use_gpu', True))
        self.exe = exe
        self.dev_count = dev_count
        self.mtl_conf = mtl_conf  # kept so pred() can resolve save_path later
print_dict(mtl_conf, title='global configuration')
# parse task instances and target tags
instnames = _parse_list(mtl_conf['task_instance'])
assert len(instnames) == len(set(instnames)), "repeated task_instance is NOT supported."
num_instances = len(instnames)
self.num_instances = num_instances
instname_to_conf = {}
instname_to_id = {}
for id, instname in enumerate(instnames):
instpath = os.path.join(task_dir, instname+'.yaml')
conf = _parse_yaml(instpath, support_cmd_line=False)
# conf = _check_conf(conf, TASK_INSTANCE_REQUIRED_ARGS)
conf = _check_conf(conf)
temp_conf = _merge_conf(mtl_conf, conf, strict=True)
print_dict(temp_conf, title='{} configuration'.format(instname))
conf = _merge_conf(mtl_conf, conf)
instname_to_conf[instname] = conf
instname_to_id[instname] = id
# prepare backbone
if 'backbone_config_path' in mtl_conf:
bb_conf = _parse_json(mtl_conf['backbone_config_path'])
bb_conf = _merge_conf(mtl_conf, bb_conf)
else:
bb_conf = mtl_conf
        print_dict(bb_conf, title='backbone configuration')
bb_name = mtl_conf['backbone']
bb_mod = importlib.import_module(BACKBONE_DIR + '.' + bb_name)
Backbone = getattr(bb_mod, 'Model')
# create task instances
instances = []
for name in instnames:
instances.append(TaskInstance(name, instname_to_id[name], instname_to_conf[name]))
check_instances(instances)
# parse target_tag
if 'target_tag' in mtl_conf:
target_tag = str(mtl_conf['target_tag'])
tags = _parse_list(target_tag, astype=int)
assert len(tags) == len(instnames), "number of target_tag is NOT consistent with that in task_instance."
for tag, inst in zip(tags, instances):
inst.is_target = tag
else:
tags = [i.is_target for i in instances]
num_targets = sum(tags)
num_auxes = num_instances - num_targets
# parse mix ratios
if 'mix_ratio' in mtl_conf:
mix_ratio = str(mtl_conf['mix_ratio'])
mrs = _parse_list(mix_ratio, astype=float)
assert len(mrs) == num_instances, "number of mix_ratios is NOT consistent with num_instances."
else:
mrs = [1.0] * num_instances
for mr, inst in zip(mrs, instances):
inst.mix_ratio = mr
# parse task layer reuse tags
instname_to_reusehost = {i:i for i in instnames}
if 'task_reuse_tag' in mtl_conf:
tags = _parse_list(mtl_conf['task_reuse_tag'], astype=int)
            assert len(tags) == num_instances, 'number of task_reuse_tags is NOT consistent with number of instances.'
else:
tags = []
            mapper = {}
            name_to_instance = {i.name: i for i in instances}
for inst in instances:
history = set()
history.add(inst.name)
cur_inst = inst
while True:
if cur_inst.task_reuse_scope in history:
mapper[inst.name] = len(tags)
break
elif cur_inst.task_reuse_scope in mapper:
mapper[inst.name] = mapper[cur_inst.task_reuse_scope]
break
else:
cur_inst = name_to_instance[cur_inst.task_reuse_scope]
history.add(cur_inst.name)
tags.append(mapper[inst.name])
for i in range(1, num_instances):
for j in range(i):
if tags[i] == tags[j]:
assert instances[i].Paradigm == \
instances[j].Paradigm, \
"paradigm of reuse tasks should be consistent"
instances[i].task_reuse_scope = instances[j].name
break
self.instances = instances
self.mrs = mrs
self.Backbone = Backbone
self.bb_conf = bb_conf
self.bb_name = bb_name
self.has_init_train = False
self.has_init_pred = False
if self._for_train:
print("initialing for training...")
self._init_train()
self.has_init_train = True
def _init_train(self):
instances = self.instances
Backbone = self.Backbone
bb_conf = self.bb_conf
bb_name = self.bb_name
dev_count = self.dev_count
num_instances = len(instances)
mrs = self.mrs
# set first_target/main task instance
main_inst = None
for inst in instances:
if inst.is_target:
main_inst = inst
inst.is_first_target = True
break
main_conf = main_inst.config
        if not os.path.exists(main_conf['save_path']):
            os.makedirs(main_conf['save_path'])
        ckpt_dir = os.path.join(main_conf['save_path'], 'ckpt')
        if not os.path.exists(ckpt_dir):
            os.makedirs(ckpt_dir)
# prepare backbone
train_backbone = Backbone(bb_conf, phase='train')
pred_backbone = Backbone(bb_conf, phase='pred')
# create reader, task
# then check i/o across reader, backbone and task_layer
task_attrs = []
pred_task_attrs = []
for inst in instances:
train_reader = inst.Reader(inst.config, phase='train')
inst.reader['train'] = train_reader
train_parad = inst.Paradigm(inst.config, phase='train', backbone_config=bb_conf)
inst.task_layer['train'] = train_parad
task_attr_from_reader = _encode_inputs(train_parad.inputs_attrs['reader'], inst.name)
task_attrs.append(task_attr_from_reader)
_check_io(train_backbone.inputs_attr, train_reader.outputs_attr, in_name=bb_name+'_backbone', out_name='reader.train')
_check_io(train_parad.inputs_attrs['reader'], train_reader.outputs_attr, in_name='task_paradigm.train.reader', out_name='reader.train')
_check_io(train_parad.inputs_attrs['backbone'], train_backbone.outputs_attr, in_name='task_paradigm.train.backbone', out_name=bb_name+'_backbone')
if inst.is_target:
if 'pred_file' not in inst.config:
inst.config['pred_file'] = ''
pred_reader = inst.Reader(inst.config, phase='pred')
pred_parad = inst.Paradigm(inst.config, phase='pred', backbone_config=bb_conf)
inst.task_layer['pred'] = pred_parad
task_attr_from_reader = _encode_inputs(pred_parad.inputs_attrs['reader'], inst.name)
pred_task_attrs.append(task_attr_from_reader)
_check_io(pred_backbone.inputs_attr, pred_reader.outputs_attr, in_name=bb_name+'_backbone', out_name='reader.pred')
_check_io(pred_parad.inputs_attrs['reader'], pred_reader.outputs_attr, in_name='task_paradigm.pred.reader', out_name='reader.pred')
_check_io(pred_parad.inputs_attrs['backbone'], pred_backbone.outputs_attr, in_name='task_paradigm.pred.backbone', out_name=bb_name+'_backbone')
# merge reader input attrs from backbone and task_instances
joint_input_names, joint_shape_and_dtypes, name_to_position = merge_input_attrs(train_backbone.inputs_attr, task_attrs)
pred_joint_input_names, pred_joint_shape_and_dtypes, _ = merge_input_attrs(pred_backbone.inputs_attr, pred_task_attrs, insert_taskid=False, insert_batchsize=False, insert_seqlen=False, insert_batchsize_x_seqlen=False)
# shapes: [task_id, shapes_of_backbone, shapes_of_inst1, ..., shapes_of_instN]
if DEBUG:
print('----- for debug -----')
print('joint input names:')
print(joint_input_names)
print('joint input shape and dtypes:')
print(joint_shape_and_dtypes)
# load data
for inst in instances:
print(inst.name+": preparing data...", end='')
inst.reader['train'].load_data()
print('ok!')
# merge dataset iterators and create net input vars
iterators = []
prefixes = []
mrs = []
for inst in instances:
iterators.append(inst.reader['train'].iterator())
prefixes.append(inst.name)
mrs.append(inst.mix_ratio)
joint_iterator_fn = create_joint_iterator_fn(iterators, prefixes, joint_shape_and_dtypes, mrs, name_to_position, dev_count=dev_count, verbose=VERBOSE, return_type='dict')
self._joint_iterator_fn = joint_iterator_fn
input_attrs = [[i, j, k] for i, (j,k) in zip(joint_input_names, joint_shape_and_dtypes)]
pred_input_attrs = [[i, j, k] for i, (j,k) in zip(pred_joint_input_names, pred_joint_shape_and_dtypes)]
# net_inputs = create_net_inputs(input_attrs, async=True, iterator_fn=joint_iterator_fn, dev_count=dev_count, n_prefetch=3)
net_inputs = create_net_inputs(input_attrs, async=False)
self._net_inputs = net_inputs
# build backbone and task layers
train_prog = fluid.default_main_program()
train_init_prog = fluid.default_startup_program()
bb_output_vars = train_backbone.build(net_inputs, scope_name='__paddlepalm_')
assert sorted(bb_output_vars.keys()) == sorted(train_backbone.outputs_attr.keys())
pred_prog = fluid.Program()
pred_init_prog = fluid.Program()
with fluid.program_guard(main_program = pred_prog, startup_program = pred_init_prog):
pred_net_inputs = create_net_inputs(pred_input_attrs)
pred_bb_output_vars = pred_backbone.build(pred_net_inputs, scope_name='__paddlepalm_')
fluid.framework.switch_main_program(train_prog)
fluid.framework.switch_startup_program(train_init_prog)
task_output_vars = {}
for inst in instances:
task_inputs = {'backbone': bb_output_vars}
task_inputs_from_reader = _decode_inputs(net_inputs, inst.name)
task_inputs['reader'] = task_inputs_from_reader
scope = inst.task_reuse_scope + '/'
with fluid.unique_name.guard(scope):
output_vars = inst.build_task_layer(task_inputs, phase='train', scope=scope)
output_vars = {inst.name+'/'+key: val for key, val in output_vars.items()}
old = len(task_output_vars) # for debug
task_output_vars.update(output_vars)
assert len(task_output_vars) - old == len(output_vars) # for debug
# prepare predict vars for saving inference model
if inst.is_target:
with fluid.program_guard(pred_prog, pred_init_prog):
cur_inputs = _decode_inputs(pred_net_inputs, inst.name)
inst.pred_input = cur_inputs
pred_task_inputs = {'backbone': pred_bb_output_vars, 'reader': cur_inputs}
scope = inst.task_reuse_scope + '/'
with fluid.unique_name.guard(scope):
inst.build_task_layer(pred_task_inputs, phase='pred', scope=scope)
bb_fetches = {k: v.name for k,v in bb_output_vars.items()}
task_fetches = {k: v.name for k,v in task_output_vars.items()}
fetches = task_fetches
fetches['__task_id'] = net_inputs['__task_id'].name
# compute loss
task_id_var = net_inputs['__task_id']
task_id_vec = fluid.one_hot(task_id_var, num_instances)
losses = fluid.layers.concat([task_output_vars[inst.name+'/loss'] for inst in instances], axis=0)
loss = layers.reduce_sum(task_id_vec * losses)
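        # Illustrative: with num_instances == 3 and the current batch sampled from
        # task 1, task_id_vec is [0, 1, 0], so reduce_sum(task_id_vec * losses)
        # keeps only the loss of task 1 and masks out the other task losses.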
main_reader = main_inst.reader['train']
num_examples = main_reader.num_examples
for inst in instances:
max_train_steps = int(main_conf['num_epochs']* inst.mix_ratio * (num_examples // main_conf['batch_size'] // dev_count))
if inst.is_target:
print('{}: expected train steps {}.'.format(inst.name, max_train_steps))
inst.steps_pur_epoch = inst.reader['train'].num_examples // main_conf['batch_size'] // dev_count
inst.expected_train_steps = max_train_steps
global_max_train_steps = int(main_conf['num_epochs'] * sum(mrs) * (num_examples // main_conf['batch_size'] // dev_count))
print('Estimated overall train steps {}.'.format(global_max_train_steps))
if 'warmup_proportion' in main_conf and main_conf['warmup_proportion'] > 0:
warmup_steps = int(global_max_train_steps * main_conf['warmup_proportion'])
print('Warmup steps: '+str(warmup_steps))
else:
warmup_steps = 0
# build optimizer
if 'optimizer' in main_conf:
optim_mod = importlib.import_module(OPTIMIZER_DIR + '.' + main_conf['optimizer'])
optimize = getattr(optim_mod, OPTIMIZE_METHOD)
optimize(loss, main_conf, max_train_steps, warmup_steps, fluid.default_main_program())
loss.persistable = True
if main_conf.get('use_ema', False):
assert 'ema_decay' in main_conf, "ema_decay should be set when use_ema is enabled."
ema = fluid.optimizer.ExponentialMovingAverage(main_conf['ema_decay'])
ema.update()
# prepare for train
self.train_backbone = train_backbone
self.train_program = fluid.CompiledProgram(fluid.default_main_program()).with_data_parallel(loss_name=loss.name)
self.saver_program = fluid.default_main_program()
        self.main_inst = main_inst
        self.main_conf = main_conf
self.fetches = fetches
self.has_init_train = True
self.has_init_pred = True
self.exe.run(fluid.default_startup_program())
print("\nRandomly initialize parameters...\n")
def _init_pred(self, instance, infer_model_path):
inst = instance
if 'pred_output_path' not in inst.config:
inst.config['pred_output_path'] = os.path.join(inst.config.get('save_path', '.'), inst.name)
if not os.path.exists(inst.config['pred_output_path']):
os.makedirs(inst.config['pred_output_path'])
pred_backbone = self.Backbone(self.bb_conf, phase='pred')
pred_parad = inst.Paradigm(inst.config, phase='pred', backbone_config=self.bb_conf)
inst.task_layer['pred'] = pred_parad
pred_joint_input_names, pred_joint_shape_and_dtypes, name_to_position = merge_input_attrs(
pred_backbone.inputs_attr, inst.task_layer['pred'].inputs_attrs['reader'],
insert_taskid=False, insert_batchsize=False, insert_seqlen=False, insert_batchsize_x_seqlen=False)
pred_prog = inst.load(infer_model_path)
if inst.reader['pred'] is None:
pred_reader = inst.Reader(inst.config, phase='pred')
inst.reader['pred'] = pred_reader
return pred_prog
def load_pretrain(self, pretrain_path=None):
# load pretrain model (or ckpt)
if pretrain_path is None:
assert 'pretrain_path' in self.main_conf, "pretrain_path NOT set."
pretrain_path = self.main_conf['pretrain_path']
init_pretraining_params(
self.exe,
pretrain_path,
main_program=fluid.default_startup_program())
def train(self):
if not self.has_init_train:
self._init_train()
self.has_init_train = True
instances = self.instances
num_instances = self.num_instances
main_inst = self.main_inst
main_conf = main_inst.config
backbone = self.train_backbone
train_program = self.train_program
saver_program = self.saver_program
fetches = self.fetches
finish = []
for inst in instances:
if inst.is_target:
if inst.expected_train_steps > 0:
finish.append(False)
else:
finish.append(True)
print(inst.name+': train finished!')
inst.save()
def train_finish():
for inst in instances:
if inst.is_target:
if not inst.train_finish:
return False
return True
def pack_multicard_feed(iterator, net_inputs, dev_count):
ret = []
mask = []
for i in range(dev_count):
temp = {}
content, flag = next(iterator)
for q, var in net_inputs.items():
temp[var.name] = content[q]
ret.append(temp)
mask.append(1 if flag else 0)
return ret, mask
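        # Illustrative result (assuming dev_count == 2): `ret` holds one feed dict
        # per device, e.g. [{var.name: batch_0_arrays}, {var.name: batch_1_arrays}],
        # and `mask` is e.g. [1, 0], where 0 marks a padded batch that merely keeps
        # an otherwise idle device busy.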
# do training
fetch_names, fetch_list = zip(*fetches.items())
main_step = 0 # only count for main task
global_step = 0 # count for all tasks
epoch = 0
time_begin = time.time()
backbone_buffer = []
def multi_dev_reader(reader, dev_count):
def worker(reader, dev_count, queue):
dev_batches = []
for index, data in enumerate(reader()):
if len(dev_batches) < dev_count:
dev_batches.append(data)
if len(dev_batches) == dev_count:
queue.put((dev_batches, 0))
dev_batches = []
# For the prediction of the remained batches, pad more batches to
# the number of devices and the padded samples would be removed in
# prediction outputs.
if len(dev_batches) > 0:
num_pad = dev_count - len(dev_batches)
for i in range(len(dev_batches), dev_count):
dev_batches.append(dev_batches[-1])
queue.put((dev_batches, num_pad))
queue.put(None)
queue = Queue.Queue(dev_count*2)
p = Thread(
target=worker, args=(reader, dev_count, queue))
p.daemon = True
p.start()
while True:
ret = queue.get()
if ret is not None:
batches, num_pad = ret
queue.task_done()
for batch in batches:
flag = num_pad == 0
if num_pad > 0:
num_pad -= 1
yield batch, flag
else:
break
queue.join()
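        # Illustrative contract: multi_dev_reader yields (batch, flag) pairs, one
        # batch per device; flag is False for trailing batches duplicated from the
        # last real batch to fill every device, so their outputs can be discarded.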
joint_iterator = multi_dev_reader(self._joint_iterator_fn, self.dev_count)
while not train_finish():
feed, mask = pack_multicard_feed(joint_iterator, self._net_inputs, self.dev_count)
rt_outputs = self.exe.run(train_program, feed=feed, fetch_list=fetch_list)
rt_outputs = {k:v for k,v in zip(fetch_names, rt_outputs)}
rt_task_id = np.squeeze(rt_outputs['__task_id']).tolist()
rt_task_id = rt_task_id[0] if isinstance(rt_task_id, list) else rt_task_id
cur_task = instances[rt_task_id]
backbone_rt_outputs = {k:v for k,v in rt_outputs.items() if '/' not in k}
backbone_buffer.append(backbone.postprocess(backbone_rt_outputs))
task_rt_outputs = {k[len(cur_task.name+'/'):]: v for k,v in rt_outputs.items() if k.startswith(cur_task.name+'/')}
instances[rt_task_id].task_layer['train'].postprocess(task_rt_outputs)
global_step += 1
cur_task.cur_train_step += 1
cur_task_global_step = cur_task.cur_train_step + cur_task.cur_train_epoch * cur_task.steps_pur_epoch
if cur_task.is_target and cur_task.save_infermodel_every_n_steps > 0 and cur_task_global_step % cur_task.save_infermodel_every_n_steps == 0:
cur_task.save(suffix='.step'+str(cur_task_global_step))
if global_step % main_conf.get('print_every_n_steps', 5) == 0:
loss = rt_outputs[cur_task.name+'/loss']
loss = np.mean(np.squeeze(loss)).tolist()
time_end = time.time()
time_cost = time_end - time_begin
print("Global step: {}. Task: {}, step {}/{} (epoch {}), loss: {:.3f}, speed: {:.2f} steps/s".format(
global_step, cur_task.name, cur_task.cur_train_step, cur_task.steps_pur_epoch, cur_task.cur_train_epoch,
loss, main_conf.get('print_every_n_steps', 5) / time_cost))
time_begin = time.time()
if cur_task.train_finish and cur_task.cur_train_step + cur_task.cur_train_epoch * cur_task.steps_pur_epoch == cur_task.expected_train_steps:
print(cur_task.name+': train finished!')
cur_task.save()
if 'save_ckpt_every_n_steps' in main_conf and global_step % main_conf['save_ckpt_every_n_steps'] == 0:
save_path = os.path.join(main_conf['save_path'], 'ckpt',
"step_" + str(global_step))
fluid.io.save_persistables(self.exe, save_path, saver_program)
print('checkpoint has been saved at '+save_path)
save_path = os.path.join(main_conf['save_path'], 'ckpt',
"step_" + str(global_step))
fluid.io.save_persistables(self.exe, save_path, saver_program)
print('checkpoint has been saved at '+save_path)
print("ALL tasks train finished, exiting...")
def pred(self, task_instance, inference_model_dir=None):
if self._for_train:
raise Exception('This controller is a trainer. Please build a new controller with for_train=False for predicting.')
assert isinstance(task_instance, str)
if isinstance(inference_model_dir, str):
assert os.path.exists(inference_model_dir), inference_model_dir+" not found."
# if not self.has_init_pred and inference_model_dir is None:
# raise ValueError('infer_model_path is required for prediction.')
if inference_model_dir is None:
            assert 'save_path' in self.mtl_conf, "either `inference_model_dir` or 'save_path' should be set to load the inference model."
inference_model_dir = os.path.join(self.mtl_conf['save_path'], task_instance, 'infer_model')
instance = None
for inst in self.instances:
if inst.name == task_instance:
instance = inst
break
if instance is None:
raise ValueError(task_instance + ' is not a valid task_instance.')
pred_prog = self._init_pred(instance, inference_model_dir)
inst = instance
print(inst.name+": loading data...")
inst.reader['pred'].load_data()
fetch_names, fetch_vars = inst.pred_fetch_list
print('predicting...')
mapper = {k:v for k,v in inst.pred_input}
buf = []
for feed in inst.reader['pred'].iterator():
feed = _encode_inputs(feed, inst.name, cand_set=mapper)
feed = {mapper[k]: v for k,v in feed.items()}
rt_outputs = self.exe.run(pred_prog, feed, fetch_vars)
rt_outputs = {k:v for k,v in zip(fetch_names, rt_outputs)}
inst.postprocess(rt_outputs, phase='pred')
if inst.task_layer['pred'].epoch_inputs_attrs:
reader_outputs = inst.reader['pred'].get_epoch_outputs()
else:
reader_outputs = None
inst.epoch_postprocess({'reader':reader_outputs}, phase='pred')
if __name__ == '__main__':
    assert len(sys.argv) == 2, "Usage: python mtl_controller.py <mtl_conf_path>"
    conf_path = sys.argv[1]
    del sys.argv[1]
    controller = ConfigController(conf_path)
    if controller.main_conf['do_train']:
        controller.train()

__all__ = ["ConfigController"]
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import os
import sys
import importlib
import multiprocessing
from paddle import fluid
from paddle.fluid import layers
import yaml
import json
import logging
import time
import numpy as np
from paddlepalm.utils.saver import init_pretraining_params, init_checkpoint
from paddlepalm.utils.config_helper import PDConfig
from paddlepalm.utils.print_helper import print_dict
from paddlepalm.utils.reader_helper import create_net_inputs, create_iterator_fn, create_joint_iterator_fn, merge_input_attrs
from paddlepalm.default_settings import *
from paddlepalm.task_instance import TaskInstance, check_instances
DEBUG=False
VERBOSE=0
def _get_basename(f):
return os.path.splitext(f)[0]
def _get_suffix(f):
return os.path.splitext(f)[-1]
def _parse_yaml(f, asdict=True, support_cmd_line=False):
assert os.path.exists(f), "file {} not found.".format(f)
if support_cmd_line:
args = PDConfig(yaml_file=f, fuse_args=True)
args.build()
return args.asdict() if asdict else args
else:
if asdict:
with open(f, "r") as fin:
yaml_config = yaml.load(fin, Loader=yaml.SafeLoader)
return yaml_config
else:
raise NotImplementedError()
def _parse_json(f, asdict=True, support_cmd_line=False):
assert os.path.exists(f), "file {} not found.".format(f)
if support_cmd_line:
args = PDConfig(json_file=f, fuse_args=support_cmd_line)
args.build()
return args.asdict() if asdict else args
else:
if asdict:
with open(f, "r") as fin:
config = json.load(fin)
return config
else:
raise NotImplementedError()
def _parse_list(string, astype=str):
assert isinstance(string, str), "{} is not a string.".format(string)
if ',' not in string:
return [astype(string)]
string = string.replace(',', ' ')
return [astype(i) for i in string.split()]
def _try_float(s):
    try:
        return float(s)
    except (TypeError, ValueError):
        return s
def _check_conf(conf, checklist=None):
assert isinstance(conf, dict), "{} is not a dict.".format(conf)
ret = {}
for k,v in conf.items():
if isinstance(v, str):
v = _try_float(v)
ret[k] = v
if checklist is not None:
for k, t in checklist:
            assert k in ret, "required argument {} does NOT exist in config file.".format(k)
            assert isinstance(ret[k], t), "value type of argument {} should be {}".format(k, t)
return ret
# TODO: support a None mechanism that allows hidden size, batch size and seqlen to be set to None
def _check_io(in_attr, out_attr, strict=False, in_name="left", out_name="right"):
for name, attr in in_attr.items():
assert name in out_attr, in_name+': '+name+' not found in '+out_name
if attr != out_attr[name]:
if strict:
raise ValueError(name+': shape or dtype not consistent!')
else:
logging.warning('{}: shape or dtype not consistent!\n{}:\n{}\n{}:\n{}'.format(name, in_name, attr, out_name, out_attr[name]))
def _merge_conf(conf1, conf2, conf1_first=True, strict=False):
assert isinstance(conf1, dict), "{} is not a dict.".format(conf1)
assert isinstance(conf2, dict), "{} is not a dict.".format(conf2)
base_conf = conf2 if conf1_first else conf1
base_conf = base_conf.copy()
new_conf = conf1 if conf1_first else conf2
    for k, v in new_conf.items():
        if k in base_conf:
            if base_conf[k] != v:
                logging.warning("value of argument {} has been updated to {}.".format(k, v))
                base_conf[k] = v
        else:
            if strict:
                continue
            base_conf[k] = v
return base_conf
def _encode_inputs(inputs, scope_name, sep='/', cand_set=None):
outputs = {}
for k, v in inputs.items():
if cand_set is not None:
if k in cand_set:
outputs[k] = v
if scope_name+sep+k in cand_set:
outputs[scope_name+sep+k] = v
else:
outputs[scope_name+sep+k] = v
return outputs
def _decode_inputs(inputs, scope_name, sep='/', keep_unk_keys=True):
outputs = {}
for name, value in inputs.items():
# var for backbone are also available to tasks
if keep_unk_keys and sep not in name:
outputs[name] = value
# var for this inst
if name.startswith(scope_name+'/'):
outputs[name[len(scope_name+'/'):]] = value
return outputs
def _init_env(use_gpu):
if use_gpu:
place = fluid.CUDAPlace(0)
dev_count = fluid.core.get_cuda_device_count()
else:
place = fluid.CPUPlace()
dev_count = int(os.environ.get('CPU_NUM', multiprocessing.cpu_count()))
return fluid.Executor(place), dev_count
def _fit_attr(conf, fit_attr, strict=False):
for i, attr in fit_attr.items():
if i not in conf:
if strict:
raise Exception('Argument {} is required to create a controller.'.format(i))
else:
continue
conf[i] = attr(conf[i])
return conf
class Controller(object):
def __init__(self, tasks, mix_ratios=None, task_reuse_tag=None, use_gpu=True):
"""
Args:
"""
exe, dev_count = _init_env(use_gpu=use_gpu)
self.exe = exe
self.dev_count = dev_count
# parse task instances and target tags
        for id in range(len(tasks)):
            tasks[id]._set_id(id)
        self._tasks = tasks
# parse mix ratios
if mix_ratios is not None:
if isinstance(mix_ratios, str):
mix_ratios = _parse_list(mix_ratios, astype=float)
else:
assert isinstance(mix_ratios, list)
assert len(mix_ratios) == len(tasks), "number of mix_ratios is NOT consistent with num_instances."
for mr, t in zip(mix_ratios, tasks):
t.mix_ratio = mr
# parse task layer reuse tags
        instname_to_reusehost = {t.name: t.name for t in tasks}
if task_reuse_tag is not None:
if isinstance(task_reuse_tag, str):
tags = _parse_list(task_reuse_tag, astype=int)
else:
assert isinstance(task_reuse_tag, list)
assert len(task_reuse_tag) == len(tasks), "number of task_reuse_tag is NOT consistent with num_tasks."
tags = task_reuse_tag
else:
tags = []
            mapper = {}
            name_to_instance = {t.name: t for t in tasks}
for inst in tasks:
history = set()
history.add(inst.name)
cur_inst = inst
while True:
if cur_inst.task_reuse_scope in history:
mapper[inst.name] = len(tags)
break
elif cur_inst.task_reuse_scope in mapper:
mapper[inst.name] = mapper[cur_inst.task_reuse_scope]
break
else:
cur_inst = name_to_instance[cur_inst.task_reuse_scope]
history.add(cur_inst.name)
tags.append(mapper[inst.name])
for i in range(1, len(tasks)):
for j in range(i):
if tags[i] == tags[j]:
# assert tasks[i].tasktype == \
# instances[j].tasktype, \
# "paradigm of reuse tasks should be consistent"
                    tasks[i]._task_reuse_scope = tasks[j].name
break
# self.instances = instances
# self.mrs = mrs
# self.Backbone = Backbone
# self.bb_conf = bb_conf
# self.bb_name = bb_name
# self.has_init_train = False
# self.has_init_pred = False
# if self._for_train:
# print("initialing for training...")
# self._init_train()
# self.has_init_train = True
#
def build_forward(self, backbone, mask_task=[]):
task_instances = self._tasks
Backbone = self.Backbone
bb_conf = self.bb_conf
bb_name = self.bb_name
dev_count = self.dev_count
        num_instances = len(task_instances)
mrs = self.mrs
# set first_target/main task instance
main_inst = None
for inst in task_instances:
if inst.is_target:
main_inst = inst
inst._as_main = True
break
        save_path = main_inst.config.get('save_path', None)
        if save_path is not None and not os.path.exists(save_path):
            os.makedirs(save_path)
# create reader, task
# then check i/o across reader, backbone and task_layer
task_attrs = []
pred_task_attrs = []
for inst in task_instances:
task_attr_from_reader = _encode_inputs(inst._taskblock['train'].inputs_attrs['reader'], inst.name)
task_attrs.append(task_attr_from_reader)
_check_io(backbone.inputs_attr, inst._reader['train'].outputs_attr, in_name=bb_name+'_backbone', out_name='reader.train')
            _check_io(inst._taskblock['train'].inputs_attrs['reader'], inst._reader['train'].outputs_attr, in_name='task_paradigm.train.reader', out_name='reader.train')
_check_io(inst._taskblock['train'].inputs_attrs['backbone'], train_backbone.outputs_attr, in_name='task_paradigm.train.backbone', out_name=bb_name+'_backbone')
if inst.is_target:
if 'pred_file' not in inst.config:
inst.config['pred_file'] = ''
pred_reader = inst.Reader(inst.config, phase='pred')
pred_parad = inst.Paradigm(inst.config, phase='pred', backbone_config=bb_conf)
inst.task_layer['pred'] = pred_parad
task_attr_from_reader = _encode_inputs(pred_parad.inputs_attrs['reader'], inst.name)
pred_task_attrs.append(task_attr_from_reader)
_check_io(pred_backbone.inputs_attr, pred_reader.outputs_attr, in_name=bb_name+'_backbone', out_name='reader.pred')
_check_io(pred_parad.inputs_attrs['reader'], pred_reader.outputs_attr, in_name='task_paradigm.pred.reader', out_name='reader.pred')
_check_io(pred_parad.inputs_attrs['backbone'], pred_backbone.outputs_attr, in_name='task_paradigm.pred.backbone', out_name=bb_name+'_backbone')
# merge reader input attrs from backbone and task_instances
joint_input_names, joint_shape_and_dtypes, name_to_position = merge_input_attrs(train_backbone.inputs_attr, task_attrs)
pred_joint_input_names, pred_joint_shape_and_dtypes, _ = merge_input_attrs(pred_backbone.inputs_attr, pred_task_attrs, insert_taskid=False, insert_batchsize=False, insert_seqlen=False, insert_batchsize_x_seqlen=False)
# shapes: [task_id, shapes_of_backbone, shapes_of_inst1, ..., shapes_of_instN]
if DEBUG:
print('----- for debug -----')
print('joint input names:')
print(joint_input_names)
print('joint input shape and dtypes:')
print(joint_shape_and_dtypes)
# load data
for inst in instances:
print(inst.name+": preparing data...", end='')
inst.reader['train'].load_data()
print('ok!')
# merge dataset iterators and create net input vars
iterators = []
prefixes = []
mrs = []
for inst in instances:
iterators.append(inst.reader['train'].iterator())
prefixes.append(inst.name)
mrs.append(inst.mix_ratio)
joint_iterator_fn = create_joint_iterator_fn(iterators, prefixes, joint_shape_and_dtypes, mrs, name_to_position, dev_count=dev_count, verbose=VERBOSE)
input_attrs = [[i, j, k] for i, (j,k) in zip(joint_input_names, joint_shape_and_dtypes)]
pred_input_attrs = [[i, j, k] for i, (j,k) in zip(pred_joint_input_names, pred_joint_shape_and_dtypes)]
net_inputs = create_net_inputs(input_attrs, async=True, iterator_fn=joint_iterator_fn, dev_count=dev_count, n_prefetch=3)
# build backbone and task layers
train_prog = fluid.default_main_program()
train_init_prog = fluid.default_startup_program()
bb_output_vars = train_backbone.build(net_inputs, scope_name='__paddlepalm_')
assert sorted(bb_output_vars.keys()) == sorted(train_backbone.outputs_attr.keys())
pred_prog = fluid.Program()
pred_init_prog = fluid.Program()
with fluid.program_guard(main_program = pred_prog, startup_program = pred_init_prog):
pred_net_inputs = create_net_inputs(pred_input_attrs)
pred_bb_output_vars = pred_backbone.build(pred_net_inputs, scope_name='__paddlepalm_')
fluid.framework.switch_main_program(train_prog)
fluid.framework.switch_startup_program(train_init_prog)
task_output_vars = {}
for inst in instances:
task_inputs = {'backbone': bb_output_vars}
task_inputs_from_reader = _decode_inputs(net_inputs, inst.name)
task_inputs['reader'] = task_inputs_from_reader
scope = inst.task_reuse_scope + '/'
with fluid.unique_name.guard(scope):
output_vars = inst.build_task_layer(task_inputs, phase='train', scope=scope)
output_vars = {inst.name+'/'+key: val for key, val in output_vars.items()}
old = len(task_output_vars) # for debug
task_output_vars.update(output_vars)
assert len(task_output_vars) - old == len(output_vars) # for debug
# prepare predict vars for saving inference model
if inst.is_target:
with fluid.program_guard(pred_prog, pred_init_prog):
cur_inputs = _decode_inputs(pred_net_inputs, inst.name)
inst.pred_input = cur_inputs
pred_task_inputs = {'backbone': pred_bb_output_vars, 'reader': cur_inputs}
scope = inst.task_reuse_scope + '/'
with fluid.unique_name.guard(scope):
inst.build_task_layer(pred_task_inputs, phase='pred', scope=scope)
bb_fetches = {k: v.name for k,v in bb_output_vars.items()}
task_fetches = {k: v.name for k,v in task_output_vars.items()}
fetches = task_fetches
fetches['__task_id'] = net_inputs['__task_id'].name
# compute loss
task_id_var = net_inputs['__task_id']
task_id_vec = layers.one_hot(task_id_var, num_instances)
losses = fluid.layers.concat([task_output_vars[inst.name+'/loss'] for inst in instances], axis=0)
loss = layers.reduce_sum(task_id_vec * losses)
    def init_train(self, basetask, num_epochs):
main_reader = main_inst.reader['train']
num_examples = main_reader.num_examples
for inst in instances:
max_train_steps = int(main_conf['num_epochs']* inst.mix_ratio * (num_examples // main_conf['batch_size'] // dev_count))
if inst.is_target:
print('{}: expected train steps {}.'.format(inst.name, max_train_steps))
inst.steps_pur_epoch = inst.reader['train'].num_examples // main_conf['batch_size'] // dev_count
inst.expected_train_steps = max_train_steps
global_max_train_steps = int(main_conf['num_epochs'] * sum(mrs) * (num_examples // main_conf['batch_size'] // dev_count))
print('Estimated overall train steps {}.'.format(global_max_train_steps))
# if 'warmup_proportion' in main_conf and main_conf['warmup_proportion'] > 0:
# warmup_steps = int(global_max_train_steps * main_conf['warmup_proportion'])
# print('Warmup steps: '+str(warmup_steps))
# else:
# warmup_steps = 0
return loss, max_train_steps
def build_backward(self, optimizer, use_ema=False, ema_decay=0.9999):
# build optimizer
optimizer.optimize(fluid.default_main_program())
# loss.persistable = True
if use_ema:
ema = fluid.optimizer.ExponentialMovingAverage(ema_decay)
ema.update()
def random_init_params(self):
if not self._init_finish:
# prepare for train
self.train_program = fluid.CompiledProgram(fluid.default_main_program()).with_data_parallel(loss_name=loss.name)
self.saver_program = fluid.default_main_program()
self._init_finish = True
print("\nRandomly initialize parameters...\n")
self.exe.run(fluid.default_startup_program())
def load_pretrain_params(self, pretrain_model_path=None):
# load pretrain model (or ckpt)
if pretrain_model_path is None:
assert 'pretrain_model_path' in self.main_conf, "pretrain_model_path NOT set."
pretrain_model_path = self.main_conf['pretrain_model_path']
init_pretraining_params(
self.exe,
pretrain_model_path,
main_program=fluid.default_startup_program())
if not self._init_finish:
self.train_program = fluid.CompiledProgram(fluid.default_main_program()).with_data_parallel(loss_name=loss.name)
self.saver_program = fluid.default_main_program()
self._init_finish = True
def load_infermodel(self, instance, infer_model_path):
inst = instance
if 'pred_output_path' not in inst.config:
inst.config['pred_output_path'] = os.path.join(inst.config.get('save_path', '.'), inst.name)
if not os.path.exists(inst.config['pred_output_path']):
os.makedirs(inst.config['pred_output_path'])
pred_backbone = self.Backbone(self.bb_conf, phase='pred')
pred_parad = inst.Paradigm(inst.config, phase='pred', backbone_config=self.bb_conf)
inst.task_layer['pred'] = pred_parad
pred_joint_input_names, pred_joint_shape_and_dtypes, name_to_position = merge_input_attrs(
pred_backbone.inputs_attr, inst.task_layer['pred'].inputs_attrs['reader'],
insert_taskid=False, insert_batchsize=False, insert_seqlen=False, insert_batchsize_x_seqlen=False)
pred_prog = inst.load(infer_model_path)
if inst.reader['pred'] is None:
pred_reader = inst.Reader(inst.config, phase='pred')
inst.reader['pred'] = pred_reader
return pred_prog
def train(self, num_epochs):
if not self._init_finish:
            raise Exception('parameters have not been initialized! Please init params with random_init_params or load_pretrain_params.')
instances = self.instances
num_instances = self.num_instances
main_inst = self.main_inst
main_conf = main_inst.config
backbone = self.train_backbone
train_program = self.train_program
saver_program = self.saver_program
fetches = self.fetches
finish = []
for inst in instances:
if inst.is_target:
if inst.expected_train_steps > 0:
finish.append(False)
else:
finish.append(True)
print(inst.name+': train finished!')
inst.save()
def train_finish():
for inst in instances:
if inst.is_target:
if not inst.train_finish:
return False
return True
# do training
fetch_names, fetch_list = zip(*fetches.items())
main_step = 0 # only count for main task
global_step = 0 # count for all tasks
epoch = 0
time_begin = time.time()
backbone_buffer = []
while not train_finish():
rt_outputs = self.exe.run(train_program, fetch_list=fetch_list)
rt_outputs = {k:v for k,v in zip(fetch_names, rt_outputs)}
rt_task_id = np.squeeze(rt_outputs['__task_id']).tolist()
rt_task_id = rt_task_id[0] if isinstance(rt_task_id, list) else rt_task_id
cur_task = instances[rt_task_id]
backbone_rt_outputs = {k:v for k,v in rt_outputs.items() if '/' not in k}
backbone_buffer.append(backbone.postprocess(backbone_rt_outputs))
task_rt_outputs = {k[len(cur_task.name+'/'):]: v for k,v in rt_outputs.items() if k.startswith(cur_task.name+'/')}
instances[rt_task_id].task_layer['train'].postprocess(task_rt_outputs)
global_step += 1
cur_task.cur_train_step += 1
if cur_task.save_infermodel_every_n_steps > 0 and cur_task.cur_train_step % cur_task.save_infermodel_every_n_steps == 0:
cur_task.save(suffix='.step'+str(cur_task.cur_train_step))
if global_step % main_conf.get('print_every_n_steps', 5) == 0:
loss = rt_outputs[cur_task.name+'/loss']
loss = np.mean(np.squeeze(loss)).tolist()
time_end = time.time()
time_cost = time_end - time_begin
print("Global step: {}. Task: {}, step {}/{} (epoch {}), loss: {:.3f}, speed: {:.2f} steps/s".format(
global_step, cur_task.name, cur_task.cur_train_step, cur_task.steps_pur_epoch, cur_task.cur_train_epoch,
loss, main_conf.get('print_every_n_steps', 5) / time_cost))
time_begin = time.time()
if cur_task.train_finish and cur_task.cur_train_step + cur_task.cur_train_epoch * cur_task.steps_pur_epoch == cur_task.expected_train_steps:
print(cur_task.name+': train finished!')
cur_task.save()
if 'save_every_n_steps' in main_conf and global_step % main_conf['save_every_n_steps'] == 0:
save_path = os.path.join(main_conf['save_path'],
"step_" + str(global_step))
fluid.io.save_persistables(self.exe, save_path, saver_program)
print("ALL tasks train finished, exiting...")
def pred(self, task_instance, inference_model_dir=None):
if self._for_train:
raise Exception('This controller is a trainer. Please build a new controller with for_train=False for predicting.')
assert isinstance(task_instance, str)
if isinstance(inference_model_dir, str):
assert os.path.exists(inference_model_dir), inference_model_dir+" not found."
# if not self.has_init_pred and inference_model_dir is None:
# raise ValueError('infer_model_path is required for prediction.')
if inference_model_dir is None:
            assert 'save_path' in self.mtl_conf, "either `inference_model_dir` or 'save_path' should be set to load the inference model."
inference_model_dir = os.path.join(self.mtl_conf['save_path'], task_instance, 'infer_model')
instance = None
for inst in self.instances:
if inst.name == task_instance:
instance = inst
break
if instance is None:
raise ValueError(task_instance + ' is not a valid task_instance.')
pred_prog = self._init_pred(instance, inference_model_dir)
inst = instance
print(inst.name+": loading data...")
inst.reader['pred'].load_data()
fetch_names, fetch_vars = inst.pred_fetch_list
print('predicting...')
mapper = {k:v for k,v in inst.pred_input}
buf = []
for feed in inst.reader['pred'].iterator():
feed = _encode_inputs(feed, inst.name, cand_set=mapper)
feed = {mapper[k]: v for k,v in feed.items()}
rt_outputs = self.exe.run(pred_prog, feed, fetch_vars)
rt_outputs = {k:v for k,v in zip(fetch_names, rt_outputs)}
inst.postprocess(rt_outputs, phase='pred')
if inst.task_layer['pred'].epoch_inputs_attrs:
reader_outputs = inst.reader['pred'].get_epoch_outputs()
else:
reader_outputs = None
inst.epoch_postprocess({'reader':reader_outputs}, phase='pred')
if __name__ == '__main__':
assert len(sys.argv) == 2, "Usage: python mtl_controller.py <mtl_conf_path>"
conf_path = sys.argv[1]
del sys.argv[1]
controller = Controller(conf_path)
if controller.main_conf['do_train']:
controller.train()
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
BACKBONE_DIR='paddlepalm.backbone'
TASK_INSTANCE_DIR='paddlepalm.task_instance'
READER_DIR='paddlepalm.reader'
PARADIGM_DIR='paddlepalm.task_paradigm'
OPTIMIZER_DIR='paddlepalm.optimizer'
OPTIMIZE_METHOD='optimize'
REQUIRED_ARGS={
'task_instance': str,
'backbone': str,
'optimizer': str,
'learning_rate': float,
'batch_size': int
}
OPTIONAL_ARGS={
'mix_ratio': str,
'target_tag': str,
    'task_reuse_tag': str
}
TASK_REQUIRED_ARGS={
'paradigm': str,
'reader': str,
'train_file': str
}
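# Illustrative task-instance config (hypothetical values) matching TASK_REQUIRED_ARGS
# above, stored as <task_instance>.yaml under task_dir:
#
#   paradigm: "cls"
#   reader: "cls"
#   train_file: "data/cls4mrqa/train.tsv"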
...@@ -5,5 +5,5 @@ import multiprocessing
gpu_dev_count = int(fluid.core.get_cuda_device_count())
cpu_dev_count = int(os.environ.get('CPU_NUM', multiprocessing.cpu_count()))

from reader import yield_pieces, data_feeder, decode_fake
...@@ -11,8 +11,8 @@ def yield_pieces(data, distribute_strategy, batch_size):
        distribute_strategy: support s=split, c=copy, u=unstack,
    """
    assert batch_size % dev_count == 0, "batch_size need to be integer times larger than dev_count."
    # print('data in yield pieces')
    # print(len(data))

    assert type(data) == type(distribute_strategy), [type(data), type(distribute_strategy)]
    assert len(data) == len(distribute_strategy), [len(data), len(distribute_strategy)]
...@@ -24,7 +24,6 @@ def yield_pieces(data, distribute_strategy, batch_size):
        assert isinstance(data, list), "the input data must be a list or dict, and contained with multiple tensors."
        data_list = data
        ds_list = distribute_strategy
    stride = batch_size // dev_count
    p = stride
    # while p < len(data_list) + stride:
...@@ -34,14 +33,14 @@ def yield_pieces(data, distribute_strategy, batch_size):
            s = s.strip().lower()
            if s == 's' or s == 'split':
                if p - stride >= len(d):
                    # print('WARNING: no more examples to feed empty devices')
                    temp = []
                    return
                temp.append(d[p-stride:p])
            elif s == 'u' or s == 'unstack':
                assert len(d) <= dev_count, 'Tensor size on dim 0 must be less equal to dev_count when unstack is applied.'
                if p//stride > len(d):
                    # print('WARNING: no more examples to feed empty devices')
                    return
                temp.append(d[p//stride-1])
            elif s == 'c' or s == 'copy':
...@@ -53,12 +52,11 @@ def yield_pieces(data, distribute_strategy, batch_size):
        if type(data) == dict:
            yield dict(zip(*[keys, temp]))
        else:
            # print('yielded pieces')
            # print(len(temp))
            yield temp

def data_feeder(reader, postprocess_fn=None, prefetch_steps=2):
    if postprocess_fn is None:
        def postprocess_fn(batch):
            return batch
...@@ -98,12 +96,27 @@ def data_feeder(reader, postprocess_fn=None, prefetch_steps=2):
                flag = idx-len(batches) < -num_pad
                # if num_pad > 0:
                #     num_pad -= 1
                # batch = postprocess_fn(batch, id)
                batch = postprocess_fn(batch)
                batch_buf.append(batch)
                flag_buf.append(flag)
            yield batch_buf, flag_buf
        else:
            break
    queue.join()

def decode_fake(nums, mask, bs):
    n_t = 0
    for flag in mask:
        if not flag:
            break
        n_t = n_t + 1

    n_f = len(mask) - n_t
    p1 = nums - (n_t-1) * bs
    each_f = p1 / (n_f+1)
    return each_f * n_f
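# Worked example (illustrative): with 4 devices, bs=4 and
# mask=[True, True, False, False] (two real batches, two padded copies),
# nums=16 runtime outputs give p1 = 16 - 1*4 = 12 and each_f = 12 / 3 = 4,
# so decode_fake returns 4 * 2 = 8, the number of padded outputs to strip
# from the tail of the results.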
from _downloader import *
from cls import Classify
from match import Match
from ner import SequenceLabel
from mrc import MRC
from mlm import MaskLM
...@@ -13,16 +13,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import json

class Head(object):

    def __init__(self, phase='train'):
        """
        phase: str. Running phase; 'train' and 'predict' are currently supported.
        """
        self._stop_gradient = {}
        self._phase = phase
        self._prog = None
        self._results_buffer = []

    @property
    def inputs_attrs(self):
...@@ -67,10 +71,31 @@ class Head(object):
        raise NotImplementedError()

    def batch_postprocess(self, rt_outputs):
        """Post-process the runtime outputs of the task layer for the current batch after each training or inference step. Note that besides the outputs of the build method, rt_outputs automatically contains the computed loss as well."""
        if isinstance(rt_outputs, dict):
            keys = rt_outputs.keys()
            vals = [rt_outputs[k] for k in keys]
            lens = [len(v) for v in vals]
            if len(set(lens)) == 1:
                results = [dict(zip(*[keys, i])) for i in zip(*vals)]
                self._results_buffer.extend(results)
                return results
            else:
                print('WARNING: irregular output results. visualize failed.')
                self._results_buffer.append(rt_outputs)
                return None

    def epoch_postprocess(self, post_inputs, output_dir=None):
        if output_dir is None:
            for i in self._results_buffer:
                print(i)
        else:
            if not os.path.exists(output_dir):
                os.makedirs(output_dir)
            with open(os.path.join(output_dir, self._phase), 'w') as writer:
                for i in self._results_buffer:
                    writer.write(json.dumps(i)+'\n')
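# Illustrative behavior of Head.batch_postprocess: given
#   rt_outputs = {'logits': [[0.2, 0.8], [0.9, 0.1]], 'probs': [[0.4, 0.6], [0.7, 0.3]]}
# all values share length 2, so it buffers and returns the per-example records
#   [{'logits': [0.2, 0.8], 'probs': [0.4, 0.6]},
#    {'logits': [0.9, 0.1], 'probs': [0.7, 0.3]}].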
...@@ -15,12 +15,13 @@ ...@@ -15,12 +15,13 @@
import paddle.fluid as fluid import paddle.fluid as fluid
from paddle.fluid import layers from paddle.fluid import layers
from paddlepalm.head.base_head import BaseHead from paddlepalm.head.base_head import Head
import numpy as np import numpy as np
import os import os
import json
class Classify(BaseHead): class Classify(Head):
""" """
classification classification
""" """
...@@ -37,6 +38,7 @@ class Classify(BaseHead): ...@@ -37,6 +38,7 @@ class Classify(BaseHead):
self._param_initializer = fluid.initializer.TruncatedNormal( self._param_initializer = fluid.initializer.TruncatedNormal(
scale=param_initializer_range) scale=param_initializer_range)
self._preds = [] self._preds = []
self._probs = []
@property @property
def inputs_attrs(self): def inputs_attrs(self):
...@@ -51,7 +53,8 @@ class Classify(BaseHead): ...@@ -51,7 +53,8 @@ class Classify(BaseHead):
if self._is_training: if self._is_training:
return {'loss': [[1], 'float32']} return {'loss': [[1], 'float32']}
else: else:
return {'logits': [[-1, self.num_classes], 'float32']} return {'logits': [[-1, self.num_classes], 'float32'],
'probs': [[-1, self.num_classes], 'float32']}
def build(self, inputs, scope_name=''): def build(self, inputs, scope_name=''):
sent_emb = inputs['backbone']['sentence_embedding'] sent_emb = inputs['backbone']['sentence_embedding']
...@@ -71,30 +74,46 @@ class Classify(BaseHead): ...@@ -71,30 +74,46 @@ class Classify(BaseHead):
bias_attr=fluid.ParamAttr( bias_attr=fluid.ParamAttr(
name=scope_name+"cls_out_b", initializer=fluid.initializer.Constant(0.))) name=scope_name+"cls_out_b", initializer=fluid.initializer.Constant(0.)))
probs = fluid.layers.softmax(logits)
if self._is_training: if self._is_training:
inputs = fluid.layers.softmax(logits)
loss = fluid.layers.cross_entropy( loss = fluid.layers.cross_entropy(
input=inputs, label=label_ids) input=probs, label=label_ids)
loss = layers.mean(loss) loss = layers.mean(loss)
return {"loss": loss} return {"loss": loss}
else: else:
return {"logits":logits} return {"logits":logits,
"probs":probs}
def batch_postprocess(self, rt_outputs): def batch_postprocess(self, rt_outputs):
if not self._is_training: if not self._is_training:
logits = rt_outputs['logits'] logits = rt_outputs['logits']
preds = np.argmax(logits, -1) probs = rt_outputs['probs']
self._preds.extend(preds.tolist()) self._preds.extend(logits.tolist())
return preds self._probs.extend(probs.tolist())
def epoch_postprocess(self, post_inputs): def epoch_postprocess(self, post_inputs, output_dir=None):
# there is no post_inputs needed and not declared in epoch_inputs_attrs, hence no elements exist in post_inputs # there is no post_inputs needed and not declared in epoch_inputs_attrs, hence no elements exist in post_inputs
if not self._is_training: if not self._is_training:
if self._pred_output_path is None: if output_dir is None:
raise ValueError('argument pred_output_path not found in config. Please add it into config dict/file.')
with open(os.path.join(self._pred_output_path, 'predictions.json'), 'w') as writer:
for p in self._preds: for p in self._preds:
writer.write(str(p)+'\n') print(p)
print('Predictions saved at '+os.path.join(self._pred_output_path, 'predictions.json')) else:
with open(os.path.join(output_dir, 'predictions.json'), 'w') as writer:
for p in self._preds:
writer.write(str(p)+'\n')
print('Predictions saved at '+os.path.join(output_dir, 'predictions.json'))
def epoch_postprocess(self, post_inputs, output_dir=None):
# there is no post_inputs needed and not declared in epoch_inputs_attrs, hence no elements exist in post_inputs
if not self._is_training:
if output_dir is None:
raise ValueError('output_dir is None. Please pass an output directory to save predictions.')
with open(os.path.join(output_dir, 'predictions.json'), 'w') as writer:
for i in range(len(self._preds)):
label = 0 if self._preds[i][0] > self._preds[i][1] else 1
result = {'index': i, 'label': label, 'logits': self._preds[i], 'probs': self._probs[i]}
result = json.dumps(result)
writer.write(result+'\n')
print('Predictions saved at '+os.path.join(output_dir, 'predictions.json'))
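# For illustration (hypothetical two-class values), each line written to
# predictions.json is a json object such as
#   {"index": 0, "label": 1, "logits": [-0.3, 1.2], "probs": [0.18, 0.82]}
# where label is the index of the larger of the two stored logits.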
...@@ -13,41 +13,66 @@ ...@@ -13,41 +13,66 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import paddle.fluid as fluid import paddle.fluid as fluid
from paddle.fluid import layers from paddle.fluid import layers
from paddlepalm.interface import task_paradigm from paddlepalm.head.base_head import Head
import numpy as np import numpy as np
import os import os
import json
def computeHingeLoss(pos, neg, margin):
loss_part1 = fluid.layers.elementwise_sub(
fluid.layers.fill_constant_batch_size_like(
input=pos, shape=[-1, 1], value=margin, dtype='float32'), pos)
loss_part2 = fluid.layers.elementwise_add(loss_part1, neg)
loss_part3 = fluid.layers.elementwise_max(
fluid.layers.fill_constant_batch_size_like(
input=loss_part2, shape=[-1, 1], value=0.0, dtype='float32'), loss_part2)
return loss_part3
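# computeHingeLoss evaluates the pairwise ranking hinge max(0, margin - pos + neg)
# elementwise over the batch. Hypothetical values with margin=0.5:
#   pos=0.9, neg=0.2 -> max(0, 0.5 - 0.9 + 0.2) = 0.0 (ranked correctly by > margin)
#   pos=0.6, neg=0.5 -> max(0, 0.5 - 0.6 + 0.5) = 0.4 (margin violated, penalized)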
class TaskParadigm(task_paradigm):
class Match(Head):
''' '''
matching matching
''' '''
def __init__(self, config, phase, backbone_config=None):
def __init__(self, num_classes, input_dim, dropout_prob=0.0, param_initializer_range=0.02, \
learning_strategy='pointwise', margin=0.5, phase='train'):
"""
Args:
num_classes: number of matching classes (used by the pointwise strategy).
learning_strategy: pointwise or pairwise.
margin: hinge-loss margin used by the pairwise strategy.
phase: train or predict.
"""
self._is_training = phase == 'train' self._is_training = phase == 'train'
self._hidden_size = backbone_config['hidden_size'] self._hidden_size = input_dim
self._num_classes = num_classes
if 'initializer_range' in config: self._dropout_prob = dropout_prob if phase == 'train' else 0.0
self._param_initializer = config['initializer_range'] self._param_initializer = fluid.initializer.TruncatedNormal(
else: scale=param_initializer_range)
self._param_initializer = fluid.initializer.TruncatedNormal( self._learning_strategy = learning_strategy
scale=backbone_config.get('initializer_range', 0.02)) self._margin = margin
if 'dropout_prob' in config:
self._dropout_prob = config['dropout_prob']
else:
self._dropout_prob = backbone_config.get('hidden_dropout_prob', 0.0)
self._pred_output_path = config.get('pred_output_path', None)
self._preds = [] self._preds = []
self._preds_logits = []
@property @property
def inputs_attrs(self): def inputs_attrs(self):
if self._is_training: reader = {}
reader = {"label_ids": [[-1, 1], 'int64']}
else:
reader = {}
bb = {"sentence_pair_embedding": [[-1, self._hidden_size], 'float32']} bb = {"sentence_pair_embedding": [[-1, self._hidden_size], 'float32']}
if self._is_training:
if self._learning_strategy == 'pointwise':
reader["label_ids"] = [[-1], 'int64']
elif self._learning_strategy == 'pairwise':
bb["sentence_pair_embedding_neg"] = [[-1, self._hidden_size], 'float32']
return {'reader': reader, 'backbone': bb} return {'reader': reader, 'backbone': bb}
@property @property
...@@ -55,51 +80,110 @@ class TaskParadigm(task_paradigm): ...@@ -55,51 +80,110 @@ class TaskParadigm(task_paradigm):
if self._is_training: if self._is_training:
return {"loss": [[1], 'float32']} return {"loss": [[1], 'float32']}
else: else:
return {"logits": [[-1, 2], 'float32']} if self._learning_strategy=='paiwise':
return {"probs": [[-1, 1], 'float32']}
else:
return {"logits": [[-1, self._num_classes], 'float32'],
"probs": [[-1, self._num_classes], 'float32']}
def build(self, inputs, scope_name=""): def build(self, inputs, scope_name=""):
if self._is_training:
labels = inputs["reader"]["label_ids"]
cls_feats = inputs["backbone"]["sentence_pair_embedding"]
# inputs
cls_feats = inputs["backbone"]["sentence_pair_embedding"]
if self._is_training: if self._is_training:
cls_feats = fluid.layers.dropout( cls_feats = fluid.layers.dropout(
x=cls_feats, x=cls_feats,
dropout_prob=self._dropout_prob, dropout_prob=self._dropout_prob,
dropout_implementation="upscale_in_train") dropout_implementation="upscale_in_train")
if self._learning_strategy == 'pairwise':
cls_feats_neg = inputs["backbone"]["sentence_pair_embedding_neg"]
cls_feats_neg = fluid.layers.dropout(
x=cls_feats_neg,
dropout_prob=self._dropout_prob,
dropout_implementation="upscale_in_train")
elif self._learning_strategy == 'pointwise':
labels = inputs["reader"]["label_ids"]
# loss
# for pointwise
if self._learning_strategy == 'pointwise':
logits = fluid.layers.fc(
input=cls_feats,
size=self._num_classes,
param_attr=fluid.ParamAttr(
name=scope_name+"cls_out_w",
initializer=self._param_initializer),
bias_attr=fluid.ParamAttr(
name=scope_name+"cls_out_b",
initializer=fluid.initializer.Constant(0.)))
probs = fluid.layers.softmax(logits)
if self._is_training:
ce_loss = fluid.layers.cross_entropy(
input=probs, label=labels)
loss = fluid.layers.mean(x=ce_loss)
return {'loss': loss}
# for pred
else:
return {'logits': logits,
'probs': probs}
# for pairwise
elif self._learning_strategy == 'pairwise':
pos_score = fluid.layers.fc(
input=cls_feats,
size=1,
act = "sigmoid",
param_attr=fluid.ParamAttr(
name=scope_name+"cls_out_w_pr",
initializer=self._param_initializer),
bias_attr=fluid.ParamAttr(
name=scope_name+"cls_out_b_pr",
initializer=fluid.initializer.Constant(0.)))
pos_score = fluid.layers.reshape(x=pos_score, shape=[-1, 1], inplace=True)
logits = fluid.layers.fc( if self._is_training:
input=cls_feats, neg_score = fluid.layers.fc(
size=2, input=cls_feats_neg,
param_attr=fluid.ParamAttr( size=1,
name=scope_name+"cls_out_w", act = "sigmoid",
initializer=self._param_initializer), param_attr=fluid.ParamAttr(
bias_attr=fluid.ParamAttr( name=scope_name+"cls_out_w_pr",
name=scope_name+"cls_out_b", initializer=self._param_initializer),
initializer=fluid.initializer.Constant(0.))) bias_attr=fluid.ParamAttr(
name=scope_name+"cls_out_b_pr",
initializer=fluid.initializer.Constant(0.)))
neg_score = fluid.layers.reshape(x=neg_score, shape=[-1, 1], inplace=True)
loss = fluid.layers.mean(computeHingeLoss(pos_score, neg_score, self._margin))
return {'loss': loss}
# for pred
else:
return {'probs': pos_score}
if self._is_training:
ce_loss, probs = fluid.layers.softmax_with_cross_entropy(
logits=logits, label=labels, return_softmax=True)
loss = fluid.layers.mean(x=ce_loss)
return {'loss': loss}
else:
return {'logits': logits}
def postprocess(self, rt_outputs): def batch_postprocess(self, rt_outputs):
if not self._is_training: if not self._is_training:
logits = rt_outputs['logits'] probs = []
preds = np.argmax(logits, -1) logits = []
self._preds.extend(preds.tolist()) probs = rt_outputs['probs']
self._preds.extend(probs.tolist())
def epoch_postprocess(self, post_inputs): if self._learning_strategy == 'pointwise':
logits = rt_outputs['logits']
self._preds_logits.extend(logits.tolist())
def epoch_postprocess(self, post_inputs, output_dir=None):
# there is no post_inputs needed and not declared in epoch_inputs_attrs, hence no elements exist in post_inputs # there is no post_inputs needed and not declared in epoch_inputs_attrs, hence no elements exist in post_inputs
if not self._is_training: if not self._is_training:
if self._pred_output_path is None: if output_dir is None:
raise ValueError('argument pred_output_path not found in config. Please add it into config dict/file.') raise ValueError('output_dir is None. Please pass an output directory to save predictions.')
with open(os.path.join(self._pred_output_path, 'predictions.json'), 'w') as writer: with open(os.path.join(output_dir, 'predictions.json'), 'w') as writer:
for p in self._preds: for i in range(len(self._preds)):
writer.write(str(p)+'\n') if self._learning_strategy == 'pointwise':
print('Predictions saved at '+os.path.join(self._pred_output_path, 'predictions.json')) label = 0 if self._preds[i][0] > self._preds[i][1] else 1
result = {'index': i, 'label': label, 'logits': self._preds_logits[i], 'probs': self._preds[i]}
elif self._learning_strategy == 'pairwise':
label = 0 if self._preds[i][0] < 0.5 else 1
result = {'index': i, 'label': label, 'probs': self._preds[i][0]}
result = json.dumps(result, ensure_ascii=False)
writer.write(result+'\n')
print('Predictions saved at '+os.path.join(output_dir, 'predictions.json'))
...@@ -14,30 +14,39 @@ ...@@ -14,30 +14,39 @@
# limitations under the License. # limitations under the License.
import paddle.fluid as fluid import paddle.fluid as fluid
from paddlepalm.interface import task_paradigm from paddlepalm.head.base_head import Head
from paddle.fluid import layers from paddle.fluid import layers
import numpy as np
import os
from paddlepalm.backbone.utils.transformer import pre_process_layer from paddlepalm.backbone.utils.transformer import pre_process_layer
class TaskParadigm(task_paradigm): class MaskLM(Head):
''' '''
matching mlm
''' '''
def __init__(self, config, phase, backbone_config=None): def __init__(self, input_dim, vocab_size, hidden_act, initializer_range, dropout_prob=0.0, \
param_initializer_range=0.02, phase='train'):
self._is_training = phase == 'train' self._is_training = phase == 'train'
self._emb_size = backbone_config['hidden_size'] self._emb_size = input_dim
self._hidden_size = backbone_config['hidden_size'] self._hidden_size = input_dim
self._vocab_size = backbone_config['vocab_size'] self._dropout_prob = dropout_prob if phase == 'train' else 0.0
self._hidden_act = backbone_config['hidden_act'] self._param_initializer = fluid.initializer.TruncatedNormal(
self._initializer_range = backbone_config['initializer_range'] scale=param_initializer_range)
self._preds = []
self._vocab_size = vocab_size
self._hidden_act = hidden_act
self._initializer_range = initializer_range
@property @property
def inputs_attrs(self): def inputs_attrs(self):
reader = { reader = {
"mask_label": [[-1, 1], 'int64'], "token_ids":[[-1, -1], 'int64'],
"mask_pos": [[-1, 1], 'int64']} "mask_label": [[-1], 'int64'],
"mask_pos": [[-1], 'int64'],
}
if not self._is_training: if not self._is_training:
del reader['mask_label'] del reader['mask_label']
bb = { bb = {
"encoder_outputs": [[-1, -1, self._hidden_size], 'float32'], "encoder_outputs": [[-1, -1, self._hidden_size], 'float32'],
"embedding_table": [[-1, self._vocab_size, self._emb_size], 'float32']} "embedding_table": [[-1, self._vocab_size, self._emb_size], 'float32']}
...@@ -54,7 +63,13 @@ class TaskParadigm(task_paradigm): ...@@ -54,7 +63,13 @@ class TaskParadigm(task_paradigm):
mask_pos = inputs["reader"]["mask_pos"] mask_pos = inputs["reader"]["mask_pos"]
if self._is_training: if self._is_training:
mask_label = inputs["reader"]["mask_label"] mask_label = inputs["reader"]["mask_label"]
max_position = inputs["reader"]["batchsize_x_seqlen"] - 1 l1 = fluid.layers.shape(inputs["reader"]["token_ids"] )[0]
# bxs = inputs["reader"]["token_ids"].shape[2].value
l2 = fluid.layers.shape(inputs["reader"]["token_ids"][0])[0]
bxs = (l1*l2).astype(np.int64)
# max_position = inputs["reader"]["batchsize_x_seqlen"] - 1
max_position = bxs - 1
mask_pos = fluid.layers.elementwise_min(mask_pos, max_position) mask_pos = fluid.layers.elementwise_min(mask_pos, max_position)
mask_pos.stop_gradient = True mask_pos.stop_gradient = True
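# Note: mask_pos indexes into the flattened [batch_size * seq_len] token axis of
# encoder_outputs, hence the clamp to bxs - 1 above. E.g. (hypothetical) with
# batch_size=2 and seq_len=128, bxs=256 and any position beyond 255 is clipped
# to 255, which guards the gather on the last, possibly smaller, batch.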
...@@ -100,11 +115,31 @@ class TaskParadigm(task_paradigm): ...@@ -100,11 +115,31 @@ class TaskParadigm(task_paradigm):
is_bias=True) is_bias=True)
if self._is_training: if self._is_training:
mask_lm_loss = fluid.layers.softmax_with_cross_entropy( inputs = fluid.layers.softmax(fc_out)
logits=fc_out, label=mask_label) mask_lm_loss = fluid.layers.cross_entropy(
input=inputs, label=mask_label)
loss = fluid.layers.mean(mask_lm_loss) loss = fluid.layers.mean(mask_lm_loss)
return {'loss': loss} return {'loss': loss}
else: else:
return {'logits': fc_out} return {'logits': fc_out}
def batch_postprocess(self, rt_outputs):
if not self._is_training:
logits = rt_outputs['logits']
preds = np.argmax(logits, -1)
self._preds.extend(preds.tolist())
return preds
def epoch_postprocess(self, post_inputs, output_dir=None):
# there is no post_inputs needed and not declared in epoch_inputs_attrs, hence no elements exist in post_inputs
if not self._is_training:
if output_dir is None:
for p in self._preds:
print(p)
else:
with open(os.path.join(output_dir, 'predictions.json'), 'w') as writer:
for p in self._preds:
writer.write(str(p)+'\n')
print('Predictions saved at '+os.path.join(output_dir, 'predictions.json'))
...@@ -14,7 +14,7 @@ ...@@ -14,7 +14,7 @@
# limitations under the License. # limitations under the License.
import paddle.fluid as fluid import paddle.fluid as fluid
from paddlepalm.interface import task_paradigm from paddlepalm.head.base_head import Head
import collections import collections
import numpy as np import numpy as np
import os import os
...@@ -26,34 +26,37 @@ import json ...@@ -26,34 +26,37 @@ import json
RawResult = collections.namedtuple("RawResult", RawResult = collections.namedtuple("RawResult",
["unique_id", "start_logits", "end_logits"]) ["unique_id", "start_logits", "end_logits"])
class TaskParadigm(task_paradigm): class MRC(Head):
"""""" """
Machine Reading Comprehension
"""
def __init__(self, max_query_len, input_dim, pred_output_path=None, verbose=False, with_negative=False, do_lower_case=False, max_ans_len=None, null_score_diff_threshold=0.0, n_best_size=20, phase='train'):
def __init__(self, config, phase, backbone_config=None):
self._is_training = phase == 'train' self._is_training = phase == 'train'
self._max_sequence_length = config['max_seq_len'] self._hidden_size = input_dim
self._hidden_size = backbone_config['hidden_size'] self._max_sequence_length = max_query_len
self._pred_results = [] self._pred_results = []
if phase == 'pred': output_dir = pred_output_path
self._max_answer_length = config.get('max_answer_len', None) self._max_answer_length = max_ans_len
self._null_score_diff_threshold = config.get('null_score_diff_threshold', 0.0) self._null_score_diff_threshold = null_score_diff_threshold
self._n_best_size = config.get('n_best_size', 20) self._n_best_size = n_best_size
self._pred_output_path = config.get('pred_output_path', None) output_dir = pred_output_path
self._verbose = config.get('verbose', False) self._verbose = verbose
self._with_negative = config.get('with_negative', False) self._with_negative = with_negative
self._do_lower_case = config.get('do_lower_case', False) self._do_lower_case = do_lower_case
@property @property
def inputs_attrs(self): def inputs_attrs(self):
if self._is_training: if self._is_training:
reader = {"start_positions": [[-1, 1], 'int64'], reader = {"start_positions": [[-1], 'int64'],
"end_positions": [[-1, 1], 'int64'], "end_positions": [[-1], 'int64'],
} }
else: else:
reader = {'unique_ids': [[-1, 1], 'int64']} reader = {'unique_ids': [[-1], 'int64']}
bb = {"encoder_outputs": [[-1, -1, self._hidden_size], 'float32']} bb = {"encoder_outputs": [[-1, -1, self._hidden_size], 'float32']}
return {'reader': reader, 'backbone': bb} return {'reader': reader, 'backbone': bb}
...@@ -70,21 +73,26 @@ class TaskParadigm(task_paradigm): ...@@ -70,21 +73,26 @@ class TaskParadigm(task_paradigm):
else: else:
return {'start_logits': [[-1, -1, 1], 'float32'], return {'start_logits': [[-1, -1, 1], 'float32'],
'end_logits': [[-1, -1, 1], 'float32'], 'end_logits': [[-1, -1, 1], 'float32'],
'unique_ids': [[-1, 1], 'int64']} 'unique_ids': [[-1], 'int64']}
def build(self, inputs, scope_name=""): def build(self, inputs, scope_name=""):
if self._is_training: if self._is_training:
start_positions = inputs['reader']['start_positions'] start_positions = inputs['reader']['start_positions']
end_positions = inputs['reader']['end_positions'] end_positions = inputs['reader']['end_positions']
max_position = inputs["reader"]["seqlen"] - 1 # max_position = inputs["reader"]["seqlen"] - 1
start_positions = fluid.layers.elementwise_min(start_positions, max_position) # start_positions = fluid.layers.elementwise_min(start_positions, max_position)
end_positions = fluid.layers.elementwise_min(end_positions, max_position) # end_positions = fluid.layers.elementwise_min(end_positions, max_position)
start_positions.stop_gradient = True start_positions.stop_gradient = True
end_positions.stop_gradient = True end_positions.stop_gradient = True
else: else:
unique_id = inputs['reader']['unique_ids'] unique_id = inputs['reader']['unique_ids']
# It's used to help fetch variable 'unique_ids' that will be removed in the future
helper_constant = fluid.layers.fill_constant(shape=[1], value=1, dtype='int64')
fluid.layers.elementwise_mul(unique_id, helper_constant)
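# The elementwise_mul above is a value-level no-op; it only attaches a consumer
# op to 'unique_ids' so the variable survives graph pruning and stays fetchable.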
enc_out = inputs['backbone']['encoder_outputs'] enc_out = inputs['backbone']['encoder_outputs']
logits = fluid.layers.fc( logits = fluid.layers.fc(
input=enc_out, input=enc_out,
...@@ -100,9 +108,11 @@ class TaskParadigm(task_paradigm): ...@@ -100,9 +108,11 @@ class TaskParadigm(task_paradigm):
start_logits, end_logits = fluid.layers.unstack(x=logits, axis=0) start_logits, end_logits = fluid.layers.unstack(x=logits, axis=0)
def _compute_single_loss(logits, positions): def _compute_single_loss(logits, positions):
"""Compute start/end loss for mrc model""" """Compute start/en
loss = fluid.layers.softmax_with_cross_entropy( d loss for mrc model"""
logits=logits, label=positions) inputs = fluid.layers.softmax(logits)
loss = fluid.layers.cross_entropy(
input=inputs, label=positions)
loss = fluid.layers.mean(x=loss) loss = fluid.layers.mean(x=loss)
return loss return loss
...@@ -117,10 +127,10 @@ class TaskParadigm(task_paradigm): ...@@ -117,10 +127,10 @@ class TaskParadigm(task_paradigm):
'unique_ids': unique_id} 'unique_ids': unique_id}
def postprocess(self, rt_outputs): def batch_postprocess(self, rt_outputs):
"""this func will be called after each step(batch) of training/evaluating/predicting process.""" """this func will be called after each step(batch) of training/evaluating/predicting process."""
if not self._is_training: if not self._is_training:
unique_ids = np.squeeze(rt_outputs['unique_ids'], -1) unique_ids = rt_outputs['unique_ids']
start_logits = rt_outputs['start_logits'] start_logits = rt_outputs['start_logits']
end_logits = rt_outputs['end_logits'] end_logits = rt_outputs['end_logits']
for idx in range(len(unique_ids)): for idx in range(len(unique_ids)):
...@@ -139,19 +149,19 @@ class TaskParadigm(task_paradigm): ...@@ -139,19 +149,19 @@ class TaskParadigm(task_paradigm):
start_logits=s, start_logits=s,
end_logits=e)) end_logits=e))
def epoch_postprocess(self, post_inputs): def epoch_postprocess(self, post_inputs, output_dir=None):
"""(optional interface) this func will be called after evaluation/predicting process and each epoch during training process.""" """(optional interface) this func will be called after evaluation/predicting process and each epoch during training process."""
if not self._is_training: if not self._is_training:
if self._pred_output_path is None: if output_dir is None:
raise ValueError('argument pred_output_path not found in config. Please add it into config dict/file.') raise ValueError('output_dir is None. Please pass an output directory to save predictions.')
examples = post_inputs['reader']['examples'] examples = post_inputs['reader']['examples']
features = post_inputs['reader']['features'] features = post_inputs['reader']['features']
if not os.path.exists(self._pred_output_path): if not os.path.exists(output_dir):
os.makedirs(self._pred_output_path) os.makedirs(output_dir)
output_prediction_file = os.path.join(self._pred_output_path, "predictions.json") output_prediction_file = os.path.join(output_dir, "predictions.json")
output_nbest_file = os.path.join(self._pred_output_path, "nbest_predictions.json") output_nbest_file = os.path.join(output_dir, "nbest_predictions.json")
output_null_log_odds_file = os.path.join(self._pred_output_path, "null_odds.json") output_null_log_odds_file = os.path.join(output_dir, "null_odds.json")
_write_predictions(examples, features, self._pred_results, _write_predictions(examples, features, self._pred_results,
self._n_best_size, self._max_answer_length, self._n_best_size, self._max_answer_length,
self._do_lower_case, output_prediction_file, self._do_lower_case, output_prediction_file,
...@@ -194,8 +204,9 @@ def _write_predictions(all_examples, all_features, all_results, n_best_size, ...@@ -194,8 +204,9 @@ def _write_predictions(all_examples, all_features, all_results, n_best_size,
# keep track of the minimum score of null start+end of position 0 # keep track of the minimum score of null start+end of position 0
score_null = 1000000 # large and positive score_null = 1000000 # large and positive
min_null_feature_index = 0 # the paragraph slice with min null score min_null_feature_index = 0 # the paragraph slice with min null score
null_start_logit = 0 # the start logit at the slice with min null score null_start_logit = 0 # the start logit at the slice with min null score
null_end_logit = 0 # the end logit at the slice with min null score null_end_logit = 0 # the end logit at the slice with min null score
for (feature_index, feature) in enumerate(features): for (feature_index, feature) in enumerate(features):
result = unique_id_to_result[feature.unique_id] result = unique_id_to_result[feature.unique_id]
start_indexes = _get_best_indexes(result.start_logits, n_best_size) start_indexes = _get_best_indexes(result.start_logits, n_best_size)
...@@ -349,14 +360,14 @@ def _write_predictions(all_examples, all_features, all_results, n_best_size, ...@@ -349,14 +360,14 @@ def _write_predictions(all_examples, all_features, all_results, n_best_size,
all_nbest_json[example.qas_id] = nbest_json all_nbest_json[example.qas_id] = nbest_json
with open(output_prediction_file, "w") as writer: with open(output_prediction_file, "w") as writer:
writer.write(json.dumps(all_predictions, indent=4) + "\n") writer.write(json.dumps(all_predictions, indent=4, ensure_ascii=False) + "\n")
with open(output_nbest_file, "w") as writer: with open(output_nbest_file, "w") as writer:
writer.write(json.dumps(all_nbest_json, indent=4) + "\n") writer.write(json.dumps(all_nbest_json, indent=4, ensure_ascii=False) + "\n")
if with_negative: if with_negative:
with open(output_null_log_odds_file, "w") as writer: with open(output_null_log_odds_file, "w") as writer:
writer.write(json.dumps(scores_diff_json, indent=4) + "\n") writer.write(json.dumps(scores_diff_json, indent=4, ensure_ascii=False) + "\n")
def _get_final_text(pred_text, orig_text, do_lower_case, verbose): def _get_final_text(pred_text, orig_text, do_lower_case, verbose):
......
...@@ -15,38 +15,44 @@ ...@@ -15,38 +15,44 @@
import paddle.fluid as fluid import paddle.fluid as fluid
from paddle.fluid import layers from paddle.fluid import layers
from paddlepalm.interface import task_paradigm from paddlepalm.head.base_head import Head
import numpy as np import numpy as np
import os import os
import math
class TaskParadigm(task_paradigm): class SequenceLabel(Head):
''' '''
classification Sequence label
''' '''
def __init__(self, config, phase, backbone_config=None): def __init__(self, num_classes, input_dim, dropout_prob=0.0, learning_rate=1e-3, \
param_initializer_range=0.02, phase='train'):
"""
Args:
num_classes: number of sequence labels.
learning_rate: learning-rate multiplier applied to the CRF layer parameters.
phase: train or predict.
"""
self._is_training = phase == 'train' self._is_training = phase == 'train'
self._hidden_size = backbone_config['hidden_size'] self._hidden_size = input_dim
self.num_classes = config['n_classes']
self.num_classes = num_classes
if 'initializer_range' in config: self._dropout_prob = dropout_prob if phase == 'train' else 0.0
self._param_initializer = config['initializer_range'] self._param_initializer = fluid.initializer.TruncatedNormal(
else: scale=param_initializer_range)
self._param_initializer = fluid.initializer.TruncatedNormal(
scale=backbone_config.get('initializer_range', 0.02)) self.learning_rate = learning_rate
if 'dropout_prob' in config:
self._dropout_prob = config['dropout_prob']
else:
self._dropout_prob = backbone_config.get('hidden_dropout_prob', 0.0)
self._pred_output_path = config.get('pred_output_path', None)
self._preds = [] self._preds = []
@property @property
def inputs_attrs(self): def inputs_attrs(self):
reader = {}
bb = {"encoder_outputs": [[-1, -1, -1], 'float32']}
if self._is_training: if self._is_training:
reader = {"label_ids": [[-1, 1], 'int64']} reader["label_ids"] = [[-1, -1], 'int64']
else: reader["seq_lens"] = [[-1], 'int64']
reader = {}
bb = {"sentence_embedding": [[-1, self._hidden_size], 'float32']}
return {'reader': reader, 'backbone': bb} return {'reader': reader, 'backbone': bb}
@property @property
...@@ -54,48 +60,67 @@ class TaskParadigm(task_paradigm): ...@@ -54,48 +60,67 @@ class TaskParadigm(task_paradigm):
if self._is_training: if self._is_training:
return {'loss': [[1], 'float32']} return {'loss': [[1], 'float32']}
else: else:
return {'logits': [[-1, self.num_classes], 'float32']} return {'emission': [[-1, self.num_classes], 'float32']}
def build(self, inputs, scope_name=''): def build(self, inputs, scope_name=''):
sent_emb = inputs['backbone']['sentence_embedding'] token_emb = inputs['backbone']['encoder_outputs']
if self._is_training: if self._is_training:
label_ids = inputs['reader']['label_ids'] label_ids = inputs['reader']['label_ids']
cls_feats = fluid.layers.dropout( seq_lens = inputs['reader']['seq_lens']
x=sent_emb,
dropout_prob=self._dropout_prob,
dropout_implementation="upscale_in_train")
logits = fluid.layers.fc( emission = fluid.layers.fc(
input=sent_emb,
size=self.num_classes, size=self.num_classes,
input=token_emb,
param_attr=fluid.ParamAttr( param_attr=fluid.ParamAttr(
name=scope_name+"cls_out_w", initializer=self._param_initializer,
initializer=self._param_initializer), regularizer=fluid.regularizer.L2DecayRegularizer(
regularization_coeff=1e-4)),
bias_attr=fluid.ParamAttr( bias_attr=fluid.ParamAttr(
name=scope_name+"cls_out_b", initializer=fluid.initializer.Constant(0.))) name=scope_name+"cls_out_b", initializer=fluid.initializer.Constant(0.)),
num_flatten_dims=2)
if self._is_training: if self._is_training:
loss = fluid.layers.softmax_with_cross_entropy(
logits=logits, label=label_ids) # compute loss
loss = layers.mean(loss) crf_cost = fluid.layers.linear_chain_crf(
return {"loss": loss} input=emission,
label=label_ids,
param_attr=fluid.ParamAttr(
name=scope_name+'crfw', learning_rate=self.learning_rate),
length=seq_lens)
avg_cost = fluid.layers.mean(x=crf_cost)
crf_decode = fluid.layers.crf_decoding(
input=emission,
param_attr=fluid.ParamAttr(name=scope_name+'crfw'),
length=seq_lens)
(precision, recall, f1_score, num_infer_chunks, num_label_chunks,
num_correct_chunks) = fluid.layers.chunk_eval(
input=crf_decode,
label=label_ids,
chunk_scheme="IOB",
num_chunk_types=int(math.ceil((self.num_classes - 1) / 2.0)),
seq_length=seq_lens)
chunk_evaluator = fluid.metrics.ChunkEvaluator()
chunk_evaluator.reset()
return {"loss": avg_cost}
else: else:
return {"logits":logits} return {"emission": emission}
def postprocess(self, rt_outputs): def batch_postprocess(self, rt_outputs):
if not self._is_training: if not self._is_training:
logits = rt_outputs['logits'] emission = rt_outputs['emission']
preds = np.argmax(logits, -1) preds = np.argmax(emission, -1)
self._preds.extend(preds.tolist()) self._preds.extend(preds.tolist())
def epoch_postprocess(self, post_inputs): def epoch_postprocess(self, post_inputs, output_dir=None):
# there is no post_inputs needed and not declared in epoch_inputs_attrs, hence no elements exist in post_inputs # there is no post_inputs needed and not declared in epoch_inputs_attrs, hence no elements exist in post_inputs
if not self._is_training: if not self._is_training:
if self._pred_output_path is None: if output_dir is None:
raise ValueError('argument pred_output_path not found in config. Please add it into config dict/file.') raise ValueError('output_dir is None. Please pass an output directory to save predictions.')
with open(os.path.join(self._pred_output_path, 'predictions.json'), 'w') as writer: with open(os.path.join(output_dir, 'predictions.json'), 'w') as writer:
for p in self._preds: for p in self._preds:
writer.write(str(p)+'\n') writer.write(str(p)+'\n')
print('Predictions saved at '+os.path.join(self._pred_output_path, 'predictions.json')) print('Predictions saved at '+os.path.join(output_dir, 'predictions.json'))
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""v1.1"""
class reader(object):
"""interface of data manager."""
def __init__(self, config):
assert isinstance(config, dict)
# @property
# def inputs_attr(self):
# """Describes the attributes of the reader's input objects: each object's name, shape and dtype. When an object is of a scalar type (str, int, float, ...), set shape to an empty list []; when some dimension of an object has variable length, set that dimension to -1.
# Return:
# dict. Attribute descriptions of the input objects. For example,
# a text classification task may need the input text and its label id
# {"text": ([], 'str'),
# "label": ([], 'int')}
# a tagging task may need the token sequence and the corresponding tags
# {"tokens", ([-1], 'str'),
# "tags", ([-1], 'str')}
# a machine reading comprehension task may need the context, question, answer and the answer span boundaries, etc.
# {"paragraph", ([], 'str'),
# "question", ([], 'str'),
# "start_position", ([], 'int')
# """
# raise NotImplementedError()
@property
def outputs_attr(self):
"""描述reader输出对象(被yield出的对象)的属性,包含各个对象的名字、shape以及数据类型。当某个对象为标量数据类型(如str, int, float等)时,shape设置为空列表[],当某个对象的某个维度长度可变时,shape中的相应维度设置为-1。
注意:当使用mini-batch梯度下降学习策略时,,应为常规的输入对象设置batch_size维度(一般为-1)
Return:
dict类型。对各个输入对象的属性描述。例如,
对于文本分类和匹配任务,yield的输出内容可能包含如下的对象(下游backbone和task可按需访问其中的对象)
{"token_ids": ([-1, max_len], 'int64'),
"input_ids": ([-1, max_len], 'int64'),
"segment_ids": ([-1, max_len], 'int64'),
"input_mask": ([-1, max_len], 'float32'),
"label": ([-1], 'int')}
"""
raise NotImplementedError()
# def parse_line(self):
# """Internally the framework describes each example as a dict whose keys come from inputs_attr and whose values conform to the corresponding attr description.
# This function parses a text line into such a dict-typed example. The default parse_line method reads json-formatted dataset files, where each line is one example described in json.
# Users can adapt to other dataset formats (e.g. csv or even tfrecord files) by overriding this method through inheritance.
# """
# raise NotImplementedError()
#
# def tokenize(self, line):
# """The framework ships with built-in tokenizers such as the word piece tokenizer; users can pick one by changing the tokenizer hyperparameter. If none of the built-in tokenizers meets the need, users can define a custom tokenizer by overriding this method through inheritance.
# Args:
# - line: a unicode string.
# Return:
# a list of tokens
# """
# raise NotImplementedError()
def iterator(self):
"""数据集遍历接口,注意,当数据集遍历到尾部时该接口应自动完成指针重置,即重新从数据集头部开始新的遍历。
Yield:
(dict) elements that meet the requirements in output_templete
"""
raise NotImplementedError()
@property
def num_examples(self):
"""数据集中的样本数量,即每个epoch中iterator所生成的样本数。注意,使用滑动窗口等可能导致数据集样本数发生变化的策略时,该接口应返回runtime阶段的实际样本数。"""
raise NotImplementedError()
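# A minimal sketch of a custom reader against this interface, assuming a
# hypothetical two-column tsv of text<TAB>label (the 'data_path' config key and
# the class name below are illustrative, not part of the framework):
#
# class TSVClassifyReader(reader):
#     def __init__(self, config):
#         reader.__init__(self, config)
#         self._path = config['data_path']
#     @property
#     def outputs_attr(self):
#         return {"text": ([-1], 'str'), "label": ([-1], 'int64')}
#     def iterator(self):
#         while True:  # auto-reset: restart from the head after each full pass
#             with open(self._path) as f:
#                 for line in f:
#                     text, label = line.rstrip('\n').split('\t')
#                     yield {"text": [text], "label": [int(label)]}
#     @property
#     def num_examples(self):
#         return sum(1 for _ in open(self._path))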
class backbone(object):
"""interface of backbone model."""
def __init__(self, config, phase):
"""
Args:
config: dict. Hyperparameters defined in the multi-task config file and the pretrained model config file.
phase: str. Running phase; currently train and predict are supported.
"""
assert isinstance(config, dict)
@property
def inputs_attr(self):
"""描述backbone从reader处需要得到的输入对象的属性,包含各个对象的名字、shape以及数据类型。当某个对象为标量数据类型(如str, int, float等)时,shape设置为空列表[],当某个对象的某个维度长度可变时,shape中的相应维度设置为-1。
Return:
dict类型。对各个输入对象的属性描述。例如,
对于文本分类和匹配任务,bert backbone依赖的reader对象主要包含如下的对象
{"token_ids": ([-1, max_len], 'int64'),
"input_ids": ([-1, max_len], 'int64'),
"segment_ids": ([-1, max_len], 'int64'),
"input_mask": ([-1, max_len], 'float32')}"""
raise NotImplementedError()
@property
def outputs_attr(self):
"""描述backbone输出对象的属性,包含各个对象的名字、shape以及数据类型。当某个对象为标量数据类型(如str, int, float等)时,shape设置为空列表[],当某个对象的某个维度长度可变时,shape中的相应维度设置为-1。
Return:
dict类型。对各个输出对象的属性描述。例如,
对于文本分类和匹配任务,bert backbone的输出内容可能包含如下的对象
{"word_emb": ([-1, max_seqlen, word_emb_size], 'float32'),
"sentence_emb": ([-1, hidden_size], 'float32'),
"sim_vec": ([-1, hidden_size], 'float32')}"""
raise NotImplementedError()
def build(self, inputs):
"""建立backbone的计算图。将符合inputs_attr描述的静态图Variable输入映射成符合outputs_attr描述的静态图Variable输出。
Args:
inputs: dict类型。字典中包含inputs_attr中的对象名到计算图Variable的映射,inputs中至少会包含inputs_attr中定义的对象
Return:
需要输出的计算图变量,输出对象会被加入到fetch_list中,从而在每个训练/推理step时得到runtime的计算结果,该计算结果会被传入postprocess方法中供用户处理。
"""
raise NotImplementedError()
class task_paradigm(object):
def __init__(self, config, phase, backbone_config):
"""
config: dict. Hyperparameters defined in the task instance and multi-task config files.
phase: str. Running phase; currently train and predict are supported.
"""
@property
def inputs_attrs(self):
"""描述task_layer需要从reader, backbone等输入对象集合所读取到的输入对象的属性,第一级key为对象集和的名字,如backbone,reader等(后续会支持更灵活的输入),第二级key为对象集和中各对象的属性,包括对象的名字,shape和dtype。当某个对象为标量数据类型(如str, int, float等)时,shape设置为空列表[],当某个对象的某个维度长度可变时,shape中的相应维度设置为-1。
Return:
dict类型。对各个对象集及其输入对象的属性描述。"""
raise NotImplementedError()
@property
def outputs_attr(self):
"""描述task输出对象的属性,包括对象的名字,shape和dtype。输出对象会被加入到fetch_list中,从而在每个训练/推理step时得到runtime的计算结果,该计算结果会被传入postprocess方法中供用户处理。
当某个对象为标量数据类型(如str, int, float等)时,shape设置为空列表[],当某个对象的某个维度长度可变时,shape中的相应维度设置为-1。
Return:
dict类型。对各个输入对象的属性描述。注意,训练阶段必须包含名为loss的输出对象。
"""
raise NotImplementedError()
@property
def epoch_inputs_attrs(self):
return {}
def build(self, inputs, scope_name=""):
"""建立task_layer的计算图。将符合inputs_attrs描述的来自各个对象集的静态图Variables映射成符合outputs_attr描述的静态图Variable输出。
Args:
inputs: dict类型。字典中包含inputs_attrs中的对象名到计算图Variable的映射,inputs中至少会包含inputs_attr中定义的对象
Return:
需要输出的计算图变量,输出对象会被加入到fetch_list中,从而在每个训练/推理step时得到runtime的计算结果,该计算结果会被传入postprocess方法中供用户处理。
"""
raise NotImplementedError()
def postprocess(self, rt_outputs):
"""每个训练或推理step后针对当前batch的task_layer的runtime计算结果进行相关后处理。注意,rt_outputs除了包含build方法,还自动包含了loss的计算结果。"""
pass
def epoch_postprocess(self, post_inputs):
pass
from slanted_triangular_schedualer import TriangularSchedualer from slanted_triangular_schedualer import TriangularSchedualer
from warmup_schedualer import WarmupSchedualer from warmup_schedualer import WarmupSchedualer
class BaseSchedualer(): class Schedualer():
def __init__(self): def __init__(self):
self._prog = None self._prog = None
...@@ -7,6 +7,6 @@ class BaseSchedualer(): ...@@ -7,6 +7,6 @@ class BaseSchedualer():
def _set_prog(self, prog): def _set_prog(self, prog):
self._prog = prog self._prog = prog
def build(self, learning_rate): def _build(self, learning_rate):
raise NotImplementedError() raise NotImplementedError()
# scheduled_lr = fluid.layers.learning_rate_scheduler\
# .noam_decay(1/(warmup_steps *(config['learning_rate'] ** 2)),
# warmup_steps)
from paddlepalm.lr_sched.schedualer import BaseSchedualer from paddlepalm.lr_sched.base_schedualer import Schedualer
from paddle import fluid from paddle import fluid
class TriangularSchedualer(BaseSchedualer): class TriangularSchedualer(Schedualer):
""" Applies linear warmup of learning rate from 0 to learning_rate until warmup_steps, and then decay to 0 linearly until num_train_steps.""" """ Implementation of Slanted Triangular learning rate schedual method, more details refer to https://arxiv.org/pdf/1801.06146.pdf . Apply linear warmup of learning rate from 0 to learning_rate until warmup_steps, and then decay to 0 linearly until num_train_steps."""
def __init__(self, warmup_steps, num_train_steps): def __init__(self, warmup_steps, num_train_steps):
BaseSchedualer.__init__(self) """Create a new TriangularSchedualer object.
Args:
warmup_steps: the learning rate will grow from 0 to max_learning_rate over `warmup_steps` steps.
num_train_steps: the number of train steps.
"""
Schedualer.__init__(self)
assert num_train_steps > warmup_steps > 0 assert num_train_steps > warmup_steps > 0
self.warmup_steps = warmup_steps self.warmup_steps = warmup_steps
self.num_train_steps = num_train_steps self.num_train_steps = num_train_steps
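# The schedule, as a sketch: for global step t,
#   lr(t) = learning_rate * t / warmup_steps                                          if t < warmup_steps
#   lr(t) = learning_rate * (num_train_steps - t) / (num_train_steps - warmup_steps)  otherwise
# i.e. the rate climbs linearly to its peak, then decays linearly to 0.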
def build(self, learning_rate): def _build(self, learning_rate):
with self._prog._lr_schedule_guard(): with self._prog._lr_schedule_guard():
lr = fluid.layers.tensor.create_global_var( lr = fluid.layers.tensor.create_global_var(
shape=[1], shape=[1],
......
from paddlepalm.lr_sched.schedualer import BaseSchedualer from paddlepalm.lr_sched.base_schedualer import Schedualer
import paddle.fluid as fluid
class WarmupSchedualer(BaseSchedualer): class WarmupSchedualer(Schedualer):
""" Applies linear warmup of learning rate from 0 to learning_rate until warmup_steps, and then decay to 0 linearly until num_train_steps.""" """ Applies linear warmup of learning rate from 0 to learning_rate until warmup_steps, and then decay to 0 linearly until num_train_steps."""
def __init__(self, warmup_steps): def __init__(self, warmup_steps):
BaseSchedualer.__init__(self) Schedualer.__init__(self)
self.warmup_steps = warmup_steps self.warmup_steps = warmup_steps
def build(self, learning_rate): def _build(self, learning_rate):
with self._prog._lr_schedule_guard(): with self._prog._lr_schedule_guard():
lr = fluid.layers.tensor.create_global_var( lr = fluid.layers.tensor.create_global_var(
......
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import os
import sys
import importlib
import multiprocessing
from paddle import fluid
from paddle.fluid import layers
import yaml
import json
import logging
import time
import numpy as np
from paddlepalm.utils.saver import init_pretraining_params, init_checkpoint
from paddlepalm.utils.config_helper import PDConfig
from paddlepalm.utils.print_helper import print_dict
from paddlepalm.utils.reader_helper import create_net_inputs, create_iterator_fn, create_joint_iterator_fn, merge_input_attrs
from paddlepalm.distribute import data_feeder
from default_settings import *
from task_instance import TaskInstance, check_instances
DEBUG=False
VERBOSE=0
def _get_basename(f):
return os.path.splitext(f)[0]
def _get_suffix(f):
return os.path.splitext(f)[-1]
def _parse_yaml(f, asdict=True, support_cmd_line=False):
assert os.path.exists(f), "file {} not found.".format(f)
if support_cmd_line:
args = PDConfig(yaml_file=f, fuse_args=True)
args.build()
return args.asdict() if asdict else args
else:
if asdict:
with open(f, "r") as fin:
yaml_config = yaml.load(fin, Loader=yaml.SafeLoader)
return yaml_config
else:
raise NotImplementedError()
def _parse_json(f, asdict=True, support_cmd_line=False):
assert os.path.exists(f), "file {} not found.".format(f)
if support_cmd_line:
args = PDConfig(json_file=f, fuse_args=support_cmd_line)
args.build()
return args.asdict() if asdict else args
else:
if asdict:
with open(f, "r") as fin:
config = json.load(fin)
return config
else:
raise NotImplementedError()
def _parse_list(string, astype=str):
assert isinstance(string, str), "{} is not a string.".format(string)
if ',' not in string:
return [astype(string)]
string = string.replace(',', ' ')
return [astype(i) for i in string.split()]
def _try_float(s):
try:
float(s)
return(float(s))
except:
return s
def _check_conf(conf, checklist=None):
assert isinstance(conf, dict), "{} is not a dict.".format(conf)
ret = {}
for k,v in conf.items():
if isinstance(v, str):
v = _try_float(v)
ret[k] = v
if checklist is not None:
for k, t in checklist:
assert k in ret, "required argument {} is NOT exist in config file.".format(k)
assert isinstance(ret[k], t), "value type of argument {} should be {}".format(k, t)
return ret
# TODO: add a None mechanism that allows hidden size, batch size and seqlen to be set to None
def _check_io(in_attr, out_attr, strict=False, in_name="left", out_name="right"):
for name, attr in in_attr.items():
assert name in out_attr, in_name+': '+name+' not found in '+out_name
if attr != out_attr[name]:
if strict:
raise ValueError(name+': shape or dtype not consistent!')
else:
logging.warning('{}: shape or dtype not consistent!\n{}:\n{}\n{}:\n{}'.format(name, in_name, attr, out_name, out_attr[name]))
def _merge_conf(conf1, conf2, conf1_first=True, strict=False):
assert isinstance(conf1, dict), "{} is not a dict.".format(conf1)
assert isinstance(conf2, dict), "{} is not a dict.".format(conf2)
base_conf = conf2 if conf1_first else conf1
base_conf = base_conf.copy()
new_conf = conf1 if conf1_first else conf2
for k, v in new_conf.items():
if k in base_conf:
if base_conf[k] != v:
raise Warning("value of argument {} has been updated to {}.".format(k, v))
else:
if strict:
continue
base_conf[k] = v
return base_conf
def _encode_inputs(inputs, scope_name, sep='/', cand_set=None):
outputs = {}
for k, v in inputs.items():
if cand_set is not None:
if k in cand_set:
outputs[k] = v
if scope_name+sep+k in cand_set:
outputs[scope_name+sep+k] = v
else:
outputs[scope_name+sep+k] = v
return outputs
def _decode_inputs(inputs, scope_name, sep='/', keep_unk_keys=True):
outputs = {}
for name, value in inputs.items():
# var for backbone are also available to tasks
if keep_unk_keys and sep not in name:
outputs[name] = value
# var for this inst
if name.startswith(scope_name+'/'):
outputs[name[len(scope_name+'/'):]] = value
return outputs
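# Example of the scope round-trip (hypothetical scope name 'taskA'):
#   _encode_inputs({'label_ids': v}, 'taskA')
#       -> {'taskA/label_ids': v}
#   _decode_inputs({'taskA/label_ids': v, 'token_ids': t}, 'taskA')
#       -> {'label_ids': v, 'token_ids': t}   # scope-free backbone vars pass through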
def _init_env(use_gpu):
if use_gpu:
place = fluid.CUDAPlace(0)
dev_count = fluid.core.get_cuda_device_count()
else:
place = fluid.CPUPlace()
dev_count = int(os.environ.get('CPU_NUM', multiprocessing.cpu_count()))
return fluid.Executor(place), dev_count
def _fit_attr(conf, fit_attr, strict=False):
for i, attr in fit_attr.items():
if i not in conf:
if strict:
raise Exception('Argument {} is required to create a controller.'.format(i))
else:
continue
conf[i] = attr(conf[i])
return conf
def create_feed_batch_process_fn(net_inputs):
def feed_batch_process_fn(data):
temp = {}
for q, var in net_inputs.items():
if isinstance(var, str) or isinstance(var, unicode):
temp[var] = data[q]
else:
temp[var.name] = data[q]
return temp
return feed_batch_process_fn
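# Sketch of the mapping performed above (hypothetical names): given
#   net_inputs = {'token_ids': var}  where var.name == '__paddlepalm_token_ids',
# a reader batch {'token_ids': arr} is rewritten to {'__paddlepalm_token_ids': arr},
# i.e. keyed by graph variable names as required by exe.run(feed=...).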
class Controller(object):
def __init__(self, config, task_dir='.', for_train=True):
"""
Args:
config: (str|dict) when passed a string, it is the path of a yaml-format config file;
"""
self._for_train = for_train
assert isinstance(config, str) or isinstance(config, dict), "a config dict or config file path is required to create a Controller."
if isinstance(config, str):
mtl_conf = _parse_yaml(config, support_cmd_line=True)
else:
mtl_conf = config
mtl_conf = _check_conf(mtl_conf)
mtl_conf = _fit_attr(mtl_conf, REQUIRED_ARGS, strict=True)
mtl_conf = _fit_attr(mtl_conf, OPTIONAL_ARGS, strict=False)
exe, dev_count = _init_env(use_gpu=mtl_conf.get('use_gpu', True))
self.exe = exe
self.dev_count = dev_count
print_dict(mtl_conf, title='global configuration')
# parse task instances and target tags
instnames = _parse_list(mtl_conf['task_instance'])
assert len(instnames) == len(set(instnames)), "repeated task_instance is NOT supported."
num_instances = len(instnames)
self.num_instances = num_instances
instname_to_conf = {}
instname_to_id = {}
for id, instname in enumerate(instnames):
instpath = os.path.join(task_dir, instname+'.yaml')
conf = _parse_yaml(instpath, support_cmd_line=False)
# conf = _check_conf(conf, TASK_INSTANCE_REQUIRED_ARGS)
conf = _check_conf(conf)
temp_conf = _merge_conf(mtl_conf, conf, strict=True)
print_dict(temp_conf, title='{} configuration'.format(instname))
conf = _merge_conf(mtl_conf, conf)
instname_to_conf[instname] = conf
instname_to_id[instname] = id
# prepare backbone
if 'backbone_config_path' in mtl_conf:
bb_conf = _parse_json(mtl_conf['backbone_config_path'])
bb_conf = _merge_conf(mtl_conf, bb_conf)
else:
bb_conf = mtl_conf
print_dict(bb_conf, title='backbone configuration')
bb_name = mtl_conf['backbone']
bb_mod = importlib.import_module(BACKBONE_DIR + '.' + bb_name)
Backbone = getattr(bb_mod, 'Model')
# create task instances
instances = []
for name in instnames:
instances.append(TaskInstance(name, instname_to_id[name], instname_to_conf[name]))
check_instances(instances)
# parse target_tag
if 'target_tag' in mtl_conf:
target_tag = str(mtl_conf['target_tag'])
tags = _parse_list(target_tag, astype=int)
assert len(tags) == len(instnames), "number of target_tag is NOT consistent with that in task_instance."
for tag, inst in zip(tags, instances):
inst.is_target = tag
else:
tags = [i.is_target for i in instances]
num_targets = sum(tags)
num_auxes = num_instances - num_targets
# parse mix ratios
if 'mix_ratio' in mtl_conf:
mix_ratio = str(mtl_conf['mix_ratio'])
mrs = _parse_list(mix_ratio, astype=float)
assert len(mrs) == num_instances, "number of mix_ratios is NOT consistent with num_instances."
else:
mrs = [1.0] * num_instances
for mr, inst in zip(mrs, instances):
inst.mix_ratio = mr
# parse task layer reuse tags
instname_to_reusehost = {i:i for i in instnames}
if 'task_reuse_tag' in mtl_conf:
tags = _parse_list(mtl_conf['task_reuse_tag'], astype=int)
assert len(tags) == num_targets, 'number of reuse_tags is NOT consistent with number of instances.'
else:
tags = []
mapper = {}
for inst in instances:
history = set()
history.add(inst.name)
cur_inst = inst
while True:
if cur_inst.task_reuse_scope in history:
mapper[inst.name] = len(tags)
break
elif cur_inst.task_reuse_scope in mapper:
mapper[inst.name] = mapper[cur_inst.task_reuse_scope]
break
else:
cur_inst = name_to_instance[cur_inst.task_reuse_scope]
history.add(cur_inst.name)
tags.append(mapper[inst.name])
for i in range(1, num_instances):
for j in range(i):
if tags[i] == tags[j]:
assert instances[i].Paradigm == \
instances[j].Paradigm, \
"paradigm of reuse tasks should be consistent"
instances[i].task_reuse_scope = instances[j].name
break
self.instances = instances
self.mrs = mrs
self.Backbone = Backbone
self.bb_conf = bb_conf
self.bb_name = bb_name
self.has_init_train = False
self.has_init_pred = False
if self._for_train:
print("initialing for training...")
self._init_train()
self.has_init_train = True
def _init_train(self):
instances = self.instances
Backbone = self.Backbone
bb_conf = self.bb_conf
bb_name = self.bb_name
dev_count = self.dev_count
num_instances = len(instances)
mrs = self.mrs
# set first_target/main task instance
main_inst = None
for inst in instances:
if inst.is_target:
main_inst = inst
inst.is_first_target = True
break
main_conf = main_inst.config
if not os.path.exists(main_conf['save_path']):
os.makedirs(main_conf['save_path'])
os.makedirs(os.path.join(main_conf['save_path'], 'ckpt'))
# prepare backbone
train_backbone = Backbone(bb_conf, phase='train')
pred_backbone = Backbone(bb_conf, phase='pred')
# create reader, task
# then check i/o across reader, backbone and task_layer
task_attrs = []
pred_task_attrs = []
for inst in instances:
train_reader = inst.Reader(inst.config, phase='train')
inst.reader['train'] = train_reader
train_parad = inst.Paradigm(inst.config, phase='train', backbone_config=bb_conf)
inst.task_layer['train'] = train_parad
task_attr_from_reader = _encode_inputs(train_parad.inputs_attrs['reader'], inst.name)
task_attrs.append(task_attr_from_reader)
_check_io(train_backbone.inputs_attr, train_reader.outputs_attr, in_name=bb_name+'_backbone', out_name='reader.train')
_check_io(train_parad.inputs_attrs['reader'], train_reader.outputs_attr, in_name='task_paradigm.train.reader', out_name='reader.train')
_check_io(train_parad.inputs_attrs['backbone'], train_backbone.outputs_attr, in_name='task_paradigm.train.backbone', out_name=bb_name+'_backbone')
if inst.is_target:
if 'pred_file' not in inst.config:
inst.config['pred_file'] = ''
pred_reader = inst.Reader(inst.config, phase='pred')
pred_parad = inst.Paradigm(inst.config, phase='pred', backbone_config=bb_conf)
inst.task_layer['pred'] = pred_parad
task_attr_from_reader = _encode_inputs(pred_parad.inputs_attrs['reader'], inst.name)
pred_task_attrs.append(task_attr_from_reader)
_check_io(pred_backbone.inputs_attr, pred_reader.outputs_attr, in_name=bb_name+'_backbone', out_name='reader.pred')
_check_io(pred_parad.inputs_attrs['reader'], pred_reader.outputs_attr, in_name='task_paradigm.pred.reader', out_name='reader.pred')
_check_io(pred_parad.inputs_attrs['backbone'], pred_backbone.outputs_attr, in_name='task_paradigm.pred.backbone', out_name=bb_name+'_backbone')
# merge reader input attrs from backbone and task_instances
joint_input_names, joint_shape_and_dtypes, name_to_position = merge_input_attrs(train_backbone.inputs_attr, task_attrs)
pred_joint_input_names, pred_joint_shape_and_dtypes, _ = merge_input_attrs(pred_backbone.inputs_attr, pred_task_attrs, insert_taskid=False, insert_batchsize=False, insert_seqlen=False, insert_batchsize_x_seqlen=False)
# shapes: [task_id, shapes_of_backbone, shapes_of_inst1, ..., shapes_of_instN]
if DEBUG:
print('----- for debug -----')
print('joint input names:')
print(joint_input_names)
print('joint input shape and dtypes:')
print(joint_shape_and_dtypes)
# load data
for inst in instances:
print(inst.name+": preparing data...", end='')
inst.reader['train'].load_data()
print('ok!')
# merge dataset iterators and create net input vars
iterators = []
prefixes = []
mrs = []
for inst in instances:
iterators.append(inst.reader['train'].iterator())
prefixes.append(inst.name)
mrs.append(inst.mix_ratio)
joint_iterator_fn = create_joint_iterator_fn(iterators, prefixes, joint_shape_and_dtypes, mrs, name_to_position, dev_count=dev_count, verbose=VERBOSE, return_type='dict')
self._joint_iterator_fn = joint_iterator_fn
input_attrs = [[i, j, k] for i, (j,k) in zip(joint_input_names, joint_shape_and_dtypes)]
pred_input_attrs = [[i, j, k] for i, (j,k) in zip(pred_joint_input_names, pred_joint_shape_and_dtypes)]
# net_inputs = create_net_inputs(input_attrs, async=True, iterator_fn=joint_iterator_fn, dev_count=dev_count, n_prefetch=3)
net_inputs = create_net_inputs(input_attrs, async=False)
self._net_inputs = net_inputs
# build backbone and task layers
train_prog = fluid.default_main_program()
train_init_prog = fluid.default_startup_program()
bb_output_vars = train_backbone.build(net_inputs, scope_name='__paddlepalm_')
assert sorted(bb_output_vars.keys()) == sorted(train_backbone.outputs_attr.keys())
pred_prog = fluid.Program()
pred_init_prog = fluid.Program()
with fluid.program_guard(main_program = pred_prog, startup_program = pred_init_prog):
pred_net_inputs = create_net_inputs(pred_input_attrs)
pred_bb_output_vars = pred_backbone.build(pred_net_inputs, scope_name='__paddlepalm_')
fluid.framework.switch_main_program(train_prog)
fluid.framework.switch_startup_program(train_init_prog)
task_output_vars = {}
for inst in instances:
task_inputs = {'backbone': bb_output_vars}
task_inputs_from_reader = _decode_inputs(net_inputs, inst.name)
task_inputs['reader'] = task_inputs_from_reader
scope = inst.task_reuse_scope + '/'
with fluid.unique_name.guard(scope):
output_vars = inst.build_task_layer(task_inputs, phase='train', scope=scope)
output_vars = {inst.name+'/'+key: val for key, val in output_vars.items()}
old = len(task_output_vars) # for debug
task_output_vars.update(output_vars)
assert len(task_output_vars) - old == len(output_vars) # for debug
# prepare predict vars for saving inference model
if inst.is_target:
with fluid.program_guard(pred_prog, pred_init_prog):
cur_inputs = _decode_inputs(pred_net_inputs, inst.name)
inst.pred_input = cur_inputs
pred_task_inputs = {'backbone': pred_bb_output_vars, 'reader': cur_inputs}
scope = inst.task_reuse_scope + '/'
with fluid.unique_name.guard(scope):
inst.build_task_layer(pred_task_inputs, phase='pred', scope=scope)
bb_fetches = {k: v.name for k,v in bb_output_vars.items()}
task_fetches = {k: v.name for k,v in task_output_vars.items()}
fetches = task_fetches
fetches['__task_id'] = net_inputs['__task_id'].name
# compute loss
task_id_var = net_inputs['__task_id']
task_id_vec = fluid.one_hot(task_id_var, num_instances)
losses = fluid.layers.concat([task_output_vars[inst.name+'/loss'] for inst in instances], axis=0)
loss = layers.reduce_sum(task_id_vec * losses)
main_reader = main_inst.reader['train']
num_examples = main_reader.num_examples
for inst in instances:
max_train_steps = int(main_conf['num_epochs']* inst.mix_ratio * (num_examples // main_conf['batch_size'] // dev_count))
if inst.is_target:
print('{}: expected train steps {}.'.format(inst.name, max_train_steps))
inst.steps_pur_epoch = inst.reader['train'].num_examples // main_conf['batch_size'] // dev_count
inst.expected_train_steps = max_train_steps
global_max_train_steps = int(main_conf['num_epochs'] * sum(mrs) * (num_examples // main_conf['batch_size'] // dev_count))
print('Estimated overall train steps {}.'.format(global_max_train_steps))
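# A worked example of the two step formulas above (hypothetical numbers):
#   num_examples=10000, batch_size=32, dev_count=2, num_epochs=2, mix_ratio=0.5
#   per-device steps per epoch: 10000 // 32 // 2 = 156
#   expected steps for this task: int(2 * 0.5 * 156) = 156
#   and with sum(mrs) = 1.5, the overall estimate is int(2 * 1.5 * 156) = 468.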
if 'warmup_proportion' in main_conf and main_conf['warmup_proportion'] > 0:
warmup_steps = int(global_max_train_steps * main_conf['warmup_proportion'])
print('Warmup steps: '+str(warmup_steps))
else:
warmup_steps = 0
# build optimizer
if 'optimizer' in main_conf:
optim_mod = importlib.import_module(OPTIMIZER_DIR + '.' + main_conf['optimizer'])
optimize = getattr(optim_mod, OPTIMIZE_METHOD)
optimize(loss, main_conf, max_train_steps, warmup_steps, fluid.default_main_program())
loss.persistable = True
if main_conf.get('use_ema', False):
assert 'ema_decay' in main_conf, "ema_decay should be set when use_ema is enabled."
ema = fluid.optimizer.ExponentialMovingAverage(main_conf['ema_decay'])
ema.update()
# prepare for train
self.train_backbone = train_backbone
self.train_program = fluid.CompiledProgram(fluid.default_main_program()).with_data_parallel(loss_name=loss.name)
self.saver_program = fluid.default_main_program()
self.main_inst = main_inst
self.fetches = fetches
self.has_init_train = True
self.has_init_pred = True
self.exe.run(fluid.default_startup_program())
print("\nRandomly initialize parameters...\n")
def _init_pred(self, instance, infer_model_path):
inst = instance
if 'pred_output_path' not in inst.config:
inst.config['pred_output_path'] = os.path.join(inst.config.get('save_path', '.'), inst.name)
if not os.path.exists(inst.config['pred_output_path']):
os.makedirs(inst.config['pred_output_path'])
pred_backbone = self.Backbone(self.bb_conf, phase='pred')
pred_parad = inst.Paradigm(inst.config, phase='pred', backbone_config=self.bb_conf)
inst.task_layer['pred'] = pred_parad
pred_joint_input_names, pred_joint_shape_and_dtypes, name_to_position = merge_input_attrs(
pred_backbone.inputs_attr, inst.task_layer['pred'].inputs_attrs['reader'],
insert_taskid=False, insert_batchsize=False, insert_seqlen=False, insert_batchsize_x_seqlen=False)
pred_prog = inst.load(infer_model_path)
pred_prog = fluid.CompiledProgram(pred_prog).with_data_parallel()
if inst.reader['pred'] is None:
pred_reader = inst.Reader(inst.config, phase='pred')
inst.reader['pred'] = pred_reader
return pred_prog
def load_pretrain(self, pretrain_path=None):
# load pretrain model (or ckpt)
if pretrain_path is None:
assert 'pretrain_path' in self.main_conf, "pretrain_path NOT set."
pretrain_path = self.main_conf['pretrain_path']
init_pretraining_params(
self.exe,
pretrain_path,
main_program=fluid.default_startup_program())
def train(self):
if not self.has_init_train:
self._init_train()
self.has_init_train = True
instances = self.instances
num_instances = self.num_instances
main_inst = self.main_inst
main_conf = main_inst.config
backbone = self.train_backbone
train_program = self.train_program
saver_program = self.saver_program
fetches = self.fetches
finish = []
for inst in instances:
if inst.is_target:
if inst.expected_train_steps > 0:
finish.append(False)
else:
finish.append(True)
print(inst.name+': train finished!')
inst.save()
def train_finish():
for inst in instances:
if inst.is_target:
if not inst.train_finish:
return False
return True
# do training
fetch_names, fetch_list = zip(*fetches.items())
main_step = 0 # only count for main task
global_step = 0 # count for all tasks
epoch = 0
time_begin = time.time()
backbone_buffer = []
feed_batch_process_fn = create_feed_batch_process_fn(self._net_inputs)
distribute_feeder = data_feeder(self._joint_iterator_fn, feed_batch_process_fn)
# palm.distribute.reader(self._joint_iterator_fn, self._net_inputs, prefetch_steps=2)
while not train_finish():
feed, mask = next(distribute_feeder)
rt_outputs = self.exe.run(train_program, feed=feed, fetch_list=fetch_list)
            while not mask.pop():
                rt_outputs.pop()
rt_outputs = {k:v for k,v in zip(fetch_names, rt_outputs)}
rt_task_id = np.squeeze(rt_outputs['__task_id']).tolist()
rt_task_id = rt_task_id[0] if isinstance(rt_task_id, list) else rt_task_id
cur_task = instances[rt_task_id]
backbone_rt_outputs = {k:v for k,v in rt_outputs.items() if '/' not in k}
backbone_buffer.append(backbone.postprocess(backbone_rt_outputs))
task_rt_outputs = {k[len(cur_task.name+'/'):]: v for k,v in rt_outputs.items() if k.startswith(cur_task.name+'/')}
instances[rt_task_id].task_layer['train'].postprocess(task_rt_outputs)
global_step += 1
cur_task.cur_train_step += 1
cur_task_global_step = cur_task.cur_train_step + cur_task.cur_train_epoch * cur_task.steps_pur_epoch
if cur_task.is_target and cur_task.save_infermodel_every_n_steps > 0 and cur_task_global_step % cur_task.save_infermodel_every_n_steps == 0:
cur_task.save(suffix='.step'+str(cur_task_global_step))
if global_step % main_conf.get('print_every_n_steps', 5) == 0:
loss = rt_outputs[cur_task.name+'/loss']
loss = np.mean(np.squeeze(loss)).tolist()
time_end = time.time()
time_cost = time_end - time_begin
print("Global step: {}. Task: {}, step {}/{} (epoch {}), loss: {:.3f}, speed: {:.2f} steps/s".format(
global_step, cur_task.name, cur_task.cur_train_step, cur_task.steps_pur_epoch, cur_task.cur_train_epoch,
loss, main_conf.get('print_every_n_steps', 5) / time_cost))
time_begin = time.time()
if cur_task.train_finish and cur_task.cur_train_step + cur_task.cur_train_epoch * cur_task.steps_pur_epoch == cur_task.expected_train_steps:
print(cur_task.name+': train finished!')
cur_task.save()
if 'save_ckpt_every_n_steps' in main_conf and global_step % main_conf['save_ckpt_every_n_steps'] == 0:
save_path = os.path.join(main_conf['save_path'], 'ckpt',
"step_" + str(global_step))
fluid.io.save_persistables(self.exe, save_path, saver_program)
print('checkpoint has been saved at '+save_path)
save_path = os.path.join(main_conf['save_path'], 'ckpt',
"step_" + str(global_step))
fluid.io.save_persistables(self.exe, save_path, saver_program)
print('checkpoint has been saved at '+save_path)
print("ALL tasks train finished, exiting...")
def pred(self, task_instance, inference_model_dir=None):
if self._for_train:
raise Exception('This controller is a trainer. Please build a new controller with for_train=False for predicting.')
assert isinstance(task_instance, str)
if isinstance(inference_model_dir, str):
assert os.path.exists(inference_model_dir), inference_model_dir+" not found."
# if not self.has_init_pred and inference_model_dir is None:
# raise ValueError('infer_model_path is required for prediction.')
if inference_model_dir is None:
assert 'save_path' in self.mtl_conf, "one of the `inference_model_dir` and 'save_path' should be set to load inference model."
inference_model_dir = os.path.join(self.mtl_conf['save_path'], task_instance, 'infer_model')
instance = None
for inst in self.instances:
if inst.name == task_instance:
instance = inst
break
if instance is None:
raise ValueError(task_instance + ' is not a valid task_instance.')
pred_prog = self._init_pred(instance, inference_model_dir)
inst = instance
print(inst.name+": loading data...")
inst.reader['pred'].load_data()
fetch_names, fetch_vars = inst.pred_fetch_list
print('predicting...')
feed_batch_process_fn = create_feed_batch_process_fn(inst.pred_input)
distribute_feeder = data_feeder(inst.reader['pred'].iterator, feed_batch_process_fn, prefetch_steps=1)
        buf = []
        for feed, mask in distribute_feeder:
            rt_outputs = self.exe.run(pred_prog, feed, fetch_vars)
            # split the fetched tensors by device, then drop the padded (fake)
            # sub-batches marked in `mask`
            splited_rt_outputs = []
            for item in rt_outputs:
                splited_rt_outputs.append(np.split(item, len(mask)))
            while not mask.pop():
                for item in splited_rt_outputs:
                    item.pop()
            rt_outputs = []
            for item in splited_rt_outputs:
                rt_outputs.append(np.concatenate(item))
            rt_outputs = {k: v for k, v in zip(fetch_names, rt_outputs)}
            inst.postprocess(rt_outputs, phase='pred')
        if inst.task_layer['pred'].epoch_inputs_attrs:
            reader_outputs = inst.reader['pred'].get_epoch_outputs()
        else:
            reader_outputs = None
        inst.epoch_postprocess({'reader': reader_outputs}, phase='pred')
if __name__ == '__main__':
assert len(sys.argv) == 2, "Usage: python mtl_controller.py <mtl_conf_path>"
conf_path = sys.argv[1]
del sys.argv[1]
controller = Controller(conf_path)
if controller.main_conf['do_train']:
controller.train()
__all__ = ["Controller"]
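# --- Illustrative only (not part of this commit): the CLI entry point above
# reads a multi-task config file. Its exact format is defined elsewhere; the
# keys below are simply the ones this controller is seen to consume, with
# placeholder values:
#
#     do_train: True
#     num_epochs: 2
#     batch_size: 32
#     optimizer: adam
#     warmup_proportion: 0.1
#     save_path: output_model/
#     pretrain_path: pretrain/ernie/
#     print_every_n_steps: 5
#     save_ckpt_every_n_steps: 10000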
from paddle import fluid
from paddle.fluid import layers
from paddlepalm.distribute import gpu_dev_count, cpu_dev_count
from paddlepalm import Trainer
from paddlepalm.utils import reader_helper
import numpy as np
import time
dev_count = 1 if gpu_dev_count <= 1 else gpu_dev_count
VERBOSE=False
class MultiHeadTrainer(Trainer):
"""
The core unit to start a multi-task training/predicting session. A MultiHeadTrainer is built based on several Trainers. Beyond the inheritance of Trainer, it additionally achieves model backbone reuse across tasks, trainer sampling for multi-task learning, and multi-head inference for effective evaluation and prediction.
"""
def __init__(self, trainers):
"""Create a new multi_head_trainer.
Args:
trainers: a list of Trainer objects.
"""
# if reuse_flags is not None:
# assert len(reuse_flags) == len(trainers)
Trainer.__init__(self, '')
self._trainers = trainers
name_maxlen = max([len(i.name) for i in self._trainers])
self._name_pads = {i.name: name_maxlen-len(i.name) for i in self._trainers}
self._train_init = False
self._predict_init = False
self._feeded_var_names = None
self._cur_train_step = 0
self._target_vars = None
self._inputname_to_varname = {}
self._pred_input_name_list = []
self._pred_input_varname_list = []
self._pred_fetch_name_list = []
self._pred_fetch_var_list = []
self._exe = None
self._save_protocol = {
'input_names': 'self._pred_input_name_list',
'input_varnames': 'self._pred_input_varname_list',
'fetch_list': 'self._pred_fetch_name_list'}
self._check_save = lambda: False
for t in self._trainers:
t._set_multitask()
def build_forward(self, backbone, heads):
"""
Build forward computation graph for training, which usually built from input layer to loss node.
Args:
backbone: a Backbone object with phase == 'train', which is used to extract multi-level text features, e.g., contextual word embedding and sentence embedding.
heads: a list of Head objects. Phase of each head should be set as 'train', which is used to build task specific output layers.
Return:
- loss_var: a Variable object. The computational graph variable(node) of loss.
"""
if isinstance(heads, list):
head_dict = {k.name: v for k,v in zip(self._trainers, heads)}
elif isinstance(heads, dict):
head_dict = heads
else:
raise ValueError()
num_heads = len(self._trainers)
assert len(head_dict) == num_heads
for t in self._trainers:
assert t.name in head_dict, "expected: {}, exists: {}".format(t.name, head_dict.keys())
train_prog = fluid.Program()
train_init_prog = fluid.Program()
self._train_prog = train_prog
self._train_init_prog = train_init_prog
def get_loss(i):
head = head_dict[self._trainers[i].name]
# loss_var = self._trainers[i].build_forward(backbone, head, train_prog, train_init_prog)
loss_var = self._trainers[i].build_forward(backbone, head)
return loss_var
# task_fns = {}
# for i in range(num_heads):
# def task_loss():
# task_id = i
# return lambda: get_loss(task_id)
# task_fns[i] = task_loss()
# task_fns = {i: lambda: get_loss(i) for i in range(num_heads)}
task_fns = {i: lambda i=i: get_loss(i) for i in range(num_heads)}
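        # Note on `lambda i=i: ...` above: the default argument freezes the task
        # index at definition time. A plain `lambda: get_loss(i)` (as in the
        # commented-out attempts) would late-bind `i`, so every switch_case
        # branch would end up computing the loss of the last task.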
with fluid.program_guard(train_prog, train_init_prog):
task_id_var = fluid.data(name="__task_id",shape=[1],dtype='int64')
# task_id_var = fluid.layers.fill_constant(shape=[1],dtype='int64', value=1)
# print(task_id_var.name)
loss_var = layers.switch_case(
branch_index=task_id_var,
branch_fns=task_fns
)
self._task_id_var = task_id_var
self._loss_var = loss_var
self._fetch_list = [loss_var.name]
# for b in train_prog.blocks:
# for var in b.vars:
# pass
# if 'task_id' in var:
# print(var)
# exit()
# print(var)
if not self._multi_task:
self._init_exe_prog(for_train=True)
return loss_var
def fit_readers(self, reader_dict):
raise NotImplementedError()
def fit_readers_with_mixratio(self, readers, sampling_reference, num_epochs, phase='train'):
"""
Bind readers and loaded train/predict data to trainers.
Args:
            readers: a dict or list of Reader objects. For the dict case, each key is a trainer's name and the mapped value is the reader object to bind to that trainer. For the list case, the i-th reader is bound to the i-th trainer.
            sampling_reference: the name of the trainer whose reader defines the per-epoch step base; other trainers' expected steps are scaled from it via their mix_ratio.
            num_epochs: the number of epochs to traverse over the sampling_reference's dataset.
        """
self._check_phase(phase)
if isinstance(readers, list):
reader_dict = {k.name: v for k,v in zip(self._trainers, readers)}
elif isinstance(readers, dict):
reader_dict = readers
else:
raise ValueError()
num_heads = len(self._trainers)
assert len(reader_dict) == num_heads, "received number of readers is not consistent with trainers."
trainer_dict = {t.name: t for t in self._trainers}
assert sampling_reference in trainer_dict
trainer_dict[sampling_reference]._set_task_id(self._task_id_var)
trainer_dict[sampling_reference].fit_reader(reader_dict[sampling_reference])
base_steps_pur_epoch = trainer_dict[sampling_reference]._steps_pur_epoch
self._finish_steps = {}
self._finish = {}
input_names = []
name_to_pos = []
joint_shape_and_dtypes = []
iterators = []
prefixes = []
mrs = []
net_inputs = []
global_steps = 0
for t in self._trainers:
assert t.name in reader_dict
            assert reader_dict[t.name].num_epochs is None, "{}: num_epochs is not None. \
                To run in multi-head mode, num_epochs of each reader should be left as None; pass num_epochs to fit_readers_with_mixratio instead.".format(t.name)
# print(num_epochs, t.mix_ratio, base_steps_pur_epoch)
max_train_steps = int(num_epochs * t.mix_ratio * base_steps_pur_epoch)
if not t._as_auxilary:
print('{}: expected train steps {}.'.format(t.name, max_train_steps))
self._finish_steps[t.name] = max_train_steps
self._finish[t.name] = False
else:
self._finish_steps[t.name] = 9999999999
self._finish[t.name] = True
global_steps += max_train_steps
if t.name != sampling_reference:
t._set_task_id(self._task_id_var)
t.fit_reader(reader_dict[t.name])
net_inputs.append(t._net_inputs)
prefixes.append(t.name)
mrs.append(t.mix_ratio)
iterators.append(t._raw_iterator_fn())
input_names.append(t._input_names)
name_to_pos.append(t._name_to_position)
joint_shape_and_dtypes.append(t._shape_and_dtypes)
print('Estimated overall train steps {}.'.format(global_steps))
self._overall_train_steps = global_steps
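        # Worked example (hypothetical numbers): if the sampling_reference
        # trainer yields base_steps_pur_epoch = 1000 batches per epoch,
        # num_epochs = 2, and there are two trainers with mix_ratio 1.0 and
        # 0.5, their expected train steps are int(2 * 1.0 * 1000) = 2000 and
        # int(2 * 0.5 * 1000) = 1000, giving an estimated overall 3000 steps.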
iterator_fn = reader_helper.create_multihead_iterator_fn(iterators, prefixes, joint_shape_and_dtypes, \
mrs, input_names, name_to_pos, dev_count=dev_count)
feed_batch_process_fn = reader_helper.create_feed_batch_process_fn(net_inputs)
if gpu_dev_count > 1:
distribute_feeder_fn = data_feeder(iterator_fn, feed_batch_process_fn)
else:
distribute_feeder_fn = iterator_fn
if phase == 'train':
self._train_reader = distribute_feeder_fn()
self._feed_batch_process_fn = feed_batch_process_fn
elif phase == 'predict':
self._predict_reader = distribute_feeder_fn()
self._pred_feed_batch_process_fn = feed_batch_process_fn
def _check_finish(self, task_name, silent=False):
trainers = {t.name:t for t in self._trainers}
if trainers[task_name]._cur_train_step == self._finish_steps[task_name]:
if not silent:
print(task_name+' train finish!')
self._finish[task_name]=True
flags = list(set(self._finish.values()))
return len(flags) == 1 and flags[0] == True
    def train(self, print_steps=5):
        """
        Start training.

        Args:
            print_steps: int. The logging frequency of training messages, e.g., current step, loss and speed.
        """
iterator = self._train_reader
self._distribute_train_prog = fluid.CompiledProgram(self._train_prog).with_data_parallel(loss_name=self._loss_var.name)
for t in self._trainers:
t._set_exe(self._exe)
t._set_dist_train(self._distribute_train_prog)
t._set_fetch_list(self._fetch_list)
time_begin = time.time()
for feed in iterator:
# batch, task_id = feed
rt_outputs, task_id = self.train_one_step(feed)
task_rt_outputs = {k[len(self._trainers[task_id].name+'.'):]: v for k,v in rt_outputs.items() if k.startswith(self._trainers[task_id].name+'.')}
self._trainers[task_id]._task_head.batch_postprocess(task_rt_outputs)
if print_steps > 0 and self._cur_train_step % print_steps == 0:
loss = rt_outputs[self._trainers[task_id].name+'.loss']
loss = np.mean(np.squeeze(loss)).tolist()
time_end = time.time()
time_cost = time_end - time_begin
print("global step: {}, {}: step {}/{} (epoch {}), loss: {:.3f}, speed: {:.2f} steps/s".format(
self._cur_train_step, ' '*self._name_pads[self._trainers[task_id].name]+self._trainers[task_id].name, \
(self._trainers[task_id]._cur_train_step-1) % self._trainers[task_id]._steps_pur_epoch + 1, \
self._trainers[task_id]._steps_pur_epoch, self._trainers[task_id]._cur_train_epoch, \
loss, print_steps / time_cost))
time_begin = time.time()
self._check_save()
finish = self._check_finish(self._trainers[task_id].name)
if finish:
break
# if cur_task.train_finish and cur_task.cur_train_step + cur_task.cur_train_epoch * cur_task.steps_pur_epoch == cur_task.expected_train_steps:
# print(cur_task.name+': train finished!')
# cur_task.save()
# if (save_predict or save_ckpt) and self._cur_train_step % save_steps == 0:
# if save_predict:
# self.save(save_path, suffix='pred.step'+str(self._cur_train_step))
# if save_ckpt:
# fluid.io.save_persistables(self._exe, os.path.join(save_path, 'ckpt.step'+str(self._cur_train_step)), self._train_prog)
# print('checkpoint has been saved at '+os.path.join(save_path, 'ckpt.step'+str(self._cur_train_step)))
def train_one_step(self, batch):
if dev_count > 1:
assert isinstance(batch, list)
task_id = batch[0]['__task_id'][0]
else:
assert isinstance(batch, dict)
task_id = batch['__task_id'][0]
# rt_outputs = self._trainers[task_id].train_one_step(batch, self._exe, self._distribute_train_prog, self._fetch_list)
rt_outputs = self._trainers[task_id].train_one_step(batch)
self._cur_train_step += 1
self._check_save()
return rt_outputs, task_id
# if dev_count > 1:
# # feed, mask, task_id = batch
# for f in feed:
# f['branch'] = np.array([task_id], dtype='int64')
# rt_outputs = self.exe.run(self._distribute_train_prog, feed=feed, fetch_list=self._trainers[task_id]._fetch_list)
# num_fakes = decode_fake(len(rt_outputs[0]), mask, self._trainers[task_id]._batch_size)
# for _ in range(num_fakes):
# for item in rt_outputs:
# item.pop()
# else:
# feed, task_id = batch
# feed['branch'] = np.array([task_id], dtype='int64')
# rt_outputs = self._exe.run(self._distribute_train_prog, feed=feed, fetch_list=self._trainers[task_id]._fetch_list)
def predict_one_batch(self, batch):
raise NotImplementedError()
def predict(self, output_dir=None, print_steps=1000):
raise NotImplementedError()
@property
def overall_train_steps(self):
return self._overall_train_steps
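# --- Usage sketch (illustrative, not part of this commit). Only the methods
# defined above are real; the backbone/head/reader construction is elided, the
# names `bb`, `head_a`, `head_b`, `reader_a`, `reader_b` are assumptions, and
# whether Trainer takes mix_ratio in its constructor is an assumption too.
#
#     t_a = Trainer('cls_task', mix_ratio=1.0)
#     t_b = Trainer('match_task', mix_ratio=0.5)
#     mh = MultiHeadTrainer([t_a, t_b])
#     loss_var = mh.build_forward(bb, {'cls_task': head_a, 'match_task': head_b})
#     mh.fit_readers_with_mixratio({'cls_task': reader_a, 'match_task': reader_b},
#                                  sampling_reference='cls_task', num_epochs=2)
#     mh.train(print_steps=10)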
@@ -20,22 +20,22 @@ from __future__ import print_function

import numpy as np
import paddle.fluid as fluid
-from paddlepalm.optimizer.base_optimizer import BaseOptimizer
+from paddlepalm.optimizer.base_optimizer import Optimizer

-class Adam(BaseOptimizer):
+class Adam(Optimizer):

    def __init__(self, loss_var, lr, lr_schedualer=None):
-        BaseOptimizer.__init__(self, loss_var, lr, lr_schedualer=None)
+        Optimizer.__init__(self, loss_var, lr, lr_schedualer=None)
        self._loss = loss_var
        self._lr = lr
        self._lr_schedualer = lr_schedualer

-    def build(self, grad_clip=None):
+    def _build(self, grad_clip=None):
        if self._lr_schedualer is not None:
-            self._lr = self._lr_schedualer.build(self._lr)
+            self._lr = self._lr_schedualer._build(self._lr)
        optimizer = fluid.optimizer.Adam(learning_rate=self._lr)
......

-class BaseOptimizer():
+class Optimizer(object):

    def __init__(self, loss_var, lr, lr_schedualer=None):
        self._prog = None
        self._lr_schedualer = lr_schedualer

-    def build(self, grad_clip=None):
-        pass
+    def _build(self, grad_clip=None):
+        raise NotImplementedError()

-    def _set_prog(self, prog):
+    def _set_prog(self, prog, init_prog):
        self._prog = prog
+        self._init_prog = init_prog
        if self._lr_schedualer is not None:
            self._lr_schedualer._set_prog(prog)
......
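# Illustrative sketch (not part of this commit): a custom optimizer plugs in by
# subclassing Optimizer and implementing _build(), mirroring the Adam subclass
# above. fluid.optimizer.SGD is a real Paddle API; everything else just follows
# the pattern shown in this diff.

import paddle.fluid as fluid
from paddlepalm.optimizer.base_optimizer import Optimizer

class SGD(Optimizer):

    def __init__(self, loss_var, lr, lr_schedualer=None):
        Optimizer.__init__(self, loss_var, lr, lr_schedualer)
        self._loss = loss_var
        self._lr = lr
        self._lr_schedualer = lr_schedualer

    def _build(self, grad_clip=None):
        # resolve the (possibly scheduled) learning rate, then minimize the loss
        if self._lr_schedualer is not None:
            self._lr = self._lr_schedualer._build(self._lr)
        optimizer = fluid.optimizer.SGD(learning_rate=self._lr)
        optimizer.minimize(self._loss)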
from cls import ClassifyReader
+from match import MatchReader
+from seq_label import SequenceLabelReader
+from mrc import MRCReader
+from mlm import MaskLMReader
@@ -14,13 +14,15 @@
# limitations under the License.

"""v1.1"""

from copy import copy

-class BaseReader(object):
+class Reader(object):
    """interface of data manager."""

    def __init__(self, phase='train'):
        # assert isinstance(config, dict)
        # self._config = config
        self._phase = phase
+        self._batch_size = None
+        self._num_epochs = 1
        self._register = set()
        self._registered_backbone = None
......
@@ -40,7 +42,6 @@ class BaseReader(object):
        self._register.add(attr_name)

    def register_with(self, backbone):
-        print(backbone)
        for attr in backbone.inputs_attr:
            self.require_attr(attr)
        self._registered_backbone = backbone
......
@@ -117,4 +118,8 @@ class BaseReader(object):
        """The number of examples in the dataset, i.e., the number of samples generated by the iterator in each epoch. Note: when using strategies that may change the number of samples (e.g., sliding windows), this interface should return the actual number of samples at runtime."""
        raise NotImplementedError()

+    @property
+    def num_epochs(self):
+        """The number of epochs to traverse over the dataset."""
+        raise NotImplementedError()
@@ -13,26 +13,50 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from paddlepalm.reader.base_reader import BaseReader
+from paddlepalm.reader.base_reader import Reader
from paddlepalm.reader.utils.reader4ernie import ClassifyReader as CLSReader

-class ClassifyReader(BaseReader):
-
-    def __init__(self, vocab_path, max_len, tokenizer='wordpiece', \
-                 lang='en', seed=None, do_lower_case=False, phase='train'):
-        """xxxxxx.
-
-        Argument:
-          - vocab_path: xxxx
-          -
-        """
+class ClassifyReader(Reader):
+    """
+    The reader completes the loading and processing of text classification datasets. Supported file format: tsv.
+
+    For the tsv format, the training dataset file should have two header fields, i.e., `label` and `text`, and the test set only requires a `text` field. For example,
+
+    ```
+    label [TAB] text
+    1 [TAB] Today is a good day.
+    0 [TAB] Such a terrible day!
+    1 [TAB] I feel lucky to meet you, dear.
+    1 [TAB] He likes sunshine and I like him :).
+    0 [TAB] JUST! GO! OUT!
+    ```
+
+    CAUTION: the first line of the file must be the header! Fields are separated by tabs (\t).
+    """
+
+    def __init__(self, vocab_path, max_len, tokenizer='wordpiece', \
+                 lang='en', seed=None, do_lower_case=False, phase='train'):
+        """Create a new Reader for loading and processing classification task data.
+
+        Args:
+            vocab_path: the vocab file path for tokenization and token_ids generation.
+            max_len: the maximum length of the sequence (after word segmentation). The part exceeding max_len will be removed from the right.
+            tokenizer: string type. The name of the tokenizer to use. A tokenizer converts raw text into tokens. Available tokenizers: wordpiece.
+            lang: the language of the dataset. Supported languages: en (English), cn (Chinese). Default is en (English).
+            seed: int type. The random seed used to shuffle the dataset. Default is None, meaning no random seed is used.
+            do_lower_case: bool type. Whether to lowercase English text. Default is False. This argument only works on English text.
+            phase: the running phase of this reader. Supported phases: train, predict. Default is train.
+
+        Return:
+            a Reader object for classification tasks.
+        """

-        BaseReader.__init__(self, phase)
+        Reader.__init__(self, phase)

        assert lang.lower() in ['en', 'cn', 'english', 'chinese'], "supported language: en (English), cn (Chinese)."
-        assert phase in ['train', 'pred'], "supported phase: train, pred."
+        assert phase in ['train', 'predict'], "supported phase: train, predict."

        for_cn = lang.lower() == 'cn' or lang.lower() == 'chinese'
......
@@ -56,6 +80,7 @@ class ClassifyReader(BaseReader):

    @property
    def outputs_attr(self):
+        """The contained output items (input features) of this reader."""
        attrs = {"token_ids": [[-1, -1], 'int64'],
                 "position_ids": [[-1, -1], 'int64'],
                 "segment_ids": [[-1, -1], 'int64'],
......
@@ -66,10 +91,23 @@ class ClassifyReader(BaseReader):
        return self._get_registed_attrs(attrs)

-    def _load_data(self, input_file, batch_size, num_epochs=None, \
-                   file_format='csv', shuffle_train=True):
-        self._data_generator = self._reader.data_generator(input_file, batch_size, \
-            num_epochs, shuffle=shuffle_train if self._phase == 'train' else False, \
+    def load_data(self, input_file, batch_size, num_epochs=None, \
+                  file_format='tsv', shuffle_train=True):
+        """Load classification data into the reader.
+
+        Args:
+            input_file: the dataset file path. The file format should be consistent with the `file_format` argument.
+            batch_size: the number of examples per yield. CAUTION: if your environment has multiple GPU devices (marked as dev_count), batch_size must be divisible by dev_count with no remainder!
+            num_epochs: the number of traversals over the input examples. Default is None, meaning once for single-task learning and automatically calculated for multi-task learning. This argument only works in the train phase.
+            file_format: the file format of the input file. Supported format: tsv. Default is tsv.
+            shuffle_train: whether to shuffle the training dataset. Default is True. This argument only works in the train phase.
+        """
+        self._batch_size = batch_size
+        self._num_epochs = num_epochs
+        self._data_generator = self._reader.data_generator( \
+            input_file, batch_size, num_epochs if self._phase == 'train' else 1, \
+            shuffle=shuffle_train if self._phase == 'train' else False, \
            phase=self._phase)

    def _iterator(self):
......
@@ -92,4 +130,8 @@ class ClassifyReader(BaseReader):
    def num_examples(self):
        return self._reader.get_num_examples(phase=self._phase)

+    @property
+    def num_epochs(self):
+        return self._num_epochs
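# --- Usage sketch (illustrative, not part of this commit); the file paths are
# placeholders and the printed values depend on the data:
#
#     reader = ClassifyReader('pretrain/ernie/vocab.txt', max_len=128,
#                             lang='en', phase='train')
#     reader.load_data('data/cls/train.tsv', batch_size=32)
#     print(reader.num_examples, reader.num_epochs)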
@@ -13,87 +13,148 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from paddlepalm.interface import reader
-from paddlepalm.reader.utils.reader4ernie import ClassifyReader
+from paddlepalm.reader.base_reader import Reader
+from paddlepalm.reader.utils.reader4ernie import ClassifyReader as CLSReader

-class Reader(reader):
-
-    def __init__(self, config, phase='train', dev_count=1, print_prefix=''):
-        """
-        Args:
-            phase: train, eval, pred
-        """
-
-        self._is_training = phase == 'train'
-
-        reader = ClassifyReader(config['vocab_path'],
-            max_seq_len=config['max_seq_len'],
-            do_lower_case=config.get('do_lower_case', True),
-            for_cn=config.get('for_cn', False),
-            random_seed=config.get('seed', None))
-        self._reader = reader
-        self._dev_count = dev_count
-
-        self._batch_size = config['batch_size']
-        self._max_seq_len = config['max_seq_len']
-        if phase == 'train':
-            self._input_file = config['train_file']
-            self._num_epochs = None  # prevent the iterator from terminating
-            self._shuffle = config.get('shuffle', True)
-            self._shuffle_buffer = config.get('shuffle_buffer', 5000)
-        elif phase == 'eval':
-            self._input_file = config['dev_file']
-            self._num_epochs = 1
-            self._shuffle = False
-            self._batch_size = config.get('pred_batch_size', self._batch_size)
-        elif phase == 'pred':
-            self._input_file = config['pred_file']
-            self._num_epochs = 1
-            self._shuffle = False
-            self._batch_size = config.get('pred_batch_size', self._batch_size)
-
-        self._phase = phase
-        # self._batch_size =
-        self._print_first_n = config.get('print_first_n', 1)
+class MatchReader(Reader):
+    """
+    The reader completes the loading and processing of matching-like task (e.g., query-query, question-answer, text similarity, natural language inference) datasets. Supported file format: tsv.
+
+    For the pointwise learning strategy, there should be three fields in the training dataset file, i.e., `text_a`, `text_b` and `label`. For pairwise learning, there should be three fields, i.e., `text_a`, `text_b` and `text_b_neg`. For prediction, only `text_a` and `text_b` are required.
+
+    A pointwise learning case is shown as follows:
+
+    ```
+    label [TAB] text_a [TAB] text_b
+    1 [TAB] Today is a good day. [TAB] what a nice day!
+    0 [TAB] Such a terrible day! [TAB] There is a dog.
+    1 [TAB] I feel lucky to meet you, dear. [TAB] You are my lucky, darling.
+    1 [TAB] He likes sunshine and I like him :). [TAB] I like him. He likes sunshine.
+    0 [TAB] JUST! GO! OUT! [TAB] Come in please.
+    ```
+
+    A pairwise learning case is shown as follows:
+
+    ```
+    text_a [TAB] text_b [TAB] text_b_neg
+    Today is a good day. [TAB] what a nice day! [TAB] terrible day!
+    Such a terrible day! [TAB] So terrible today! [TAB] There is a dog.
+    I feel lucky to meet you, dear. [TAB] You are my lucky, darling. [TAB] Buy some bananas, okay?
+    He likes sunshine and I like him :). [TAB] I like him. He likes sunshine. [TAB] He has a dog.
+    JUST! GO! OUT! [TAB] go out now! [TAB] Come in please.
+    ```
+
+    CAUTION: the header is required for each dataset file! Fields (columns) are separated by tabs (\t).
+    """
+
+    def __init__(self, vocab_path, max_len, tokenizer='wordpiece', lang='en', seed=None, \
+                 do_lower_case=False, learning_strategy='pointwise', phase='train', dev_count=1, print_prefix=''):  # add parameters as needed
+        """Create a new Reader for matching task data.
+
+        Args:
+            vocab_path: the vocab file path for tokenization and token_ids generation.
+            max_len: the maximum length of the sequence (after word segmentation). The part exceeding max_len will be removed from the right.
+            tokenizer: string type. The name of the tokenizer to use. A tokenizer converts raw text into tokens. Available tokenizers: wordpiece.
+            lang: the language of the dataset. Supported languages: en (English), cn (Chinese). Default is en (English).
+            seed: int type. The random seed used to shuffle the dataset. Default is None, meaning no random seed is used.
+            do_lower_case: bool type. Whether to lowercase English text. Default is False. This argument only works on English text.
+            learning_strategy: string type. This only works in the train phase. Available strategies: pointwise, pairwise.
+            phase: the running phase of this reader. Supported phases: train, predict. Default is train.
+
+        Return:
+            a Reader object for matching-like tasks.
+        """
+
+        Reader.__init__(self, phase)
+
+        assert lang.lower() in ['en', 'cn', 'english', 'chinese'], "supported language: en (English), cn (Chinese)."
+        assert phase in ['train', 'predict'], "supported phase: train, predict."
+
+        for_cn = lang.lower() == 'cn' or lang.lower() == 'chinese'
+
+        self._register.add('token_ids')
+        if phase == 'train':
+            if learning_strategy == 'pointwise':
+                self._register.add('label_ids')
+            if learning_strategy == 'pairwise':
+                self._register.add('token_ids_neg')
+                self._register.add('position_ids_neg')
+                self._register.add('segment_ids_neg')
+                self._register.add('input_mask_neg')
+                self._register.add('task_ids_neg')
+
+        self._is_training = phase == 'train'
+        self._learning_strategy = learning_strategy
+
+        match_reader = CLSReader(vocab_path,
+            max_seq_len=max_len,
+            do_lower_case=do_lower_case,
+            for_cn=for_cn,
+            random_seed=seed,
+            learning_strategy=learning_strategy)
+        self._reader = match_reader
+        self._dev_count = dev_count
        self._phase = phase

    @property
    def outputs_attr(self):
-        if self._is_training:
-            return {"token_ids": [[-1, -1], 'int64'],
-                    "position_ids": [[-1, -1], 'int64'],
-                    "segment_ids": [[-1, -1], 'int64'],
-                    "input_mask": [[-1, -1, 1], 'float32'],
-                    "label_ids": [[-1], 'int64'],
-                    "task_ids": [[-1, -1], 'int64']
-                    }
-        else:
-            return {"token_ids": [[-1, -1], 'int64'],
-                    "position_ids": [[-1, -1], 'int64'],
-                    "segment_ids": [[-1, -1], 'int64'],
-                    "task_ids": [[-1, -1], 'int64'],
-                    "input_mask": [[-1, -1, 1], 'float32']
-                    }
+        attrs = {"token_ids": [[-1, -1], 'int64'],
+                 "position_ids": [[-1, -1], 'int64'],
+                 "segment_ids": [[-1, -1], 'int64'],
+                 "input_mask": [[-1, -1, 1], 'float32'],
+                 "task_ids": [[-1, -1], 'int64'],
+                 "label_ids": [[-1], 'int64'],
+                 "token_ids_neg": [[-1, -1], 'int64'],
+                 "position_ids_neg": [[-1, -1], 'int64'],
+                 "segment_ids_neg": [[-1, -1], 'int64'],
+                 "input_mask_neg": [[-1, -1, 1], 'float32'],
+                 "task_ids_neg": [[-1, -1], 'int64']
+                 }
+        return self._get_registed_attrs(attrs)

-    def load_data(self):
-        self._data_generator = self._reader.data_generator(self._input_file, self._batch_size, self._num_epochs, dev_count=self._dev_count, shuffle=self._shuffle, phase=self._phase)
-
-    def iterator(self):
-
-        def list_to_dict(x):
-            names = ['token_ids', 'segment_ids', 'position_ids', 'task_ids', 'input_mask',
-                     'label_ids', 'unique_ids']
-            outputs = {n: i for n, i in zip(names, x)}
-            del outputs['unique_ids']
-            if not self._is_training:
-                del outputs['label_ids']
-            return outputs
-
-        for batch in self._data_generator():
-            yield list_to_dict(batch)
+    def load_data(self, input_file, batch_size, num_epochs=None, \
+                  file_format='tsv', shuffle_train=True):
+        """Load matching data into the reader.
+
+        Args:
+            input_file: the dataset file path. The file format should be consistent with the `file_format` argument.
+            batch_size: the number of examples per yield. CAUTION: if your environment has multiple GPU devices (marked as dev_count), batch_size must be divisible by dev_count with no remainder!
+            num_epochs: the number of traversals over the input examples. Default is None, meaning once for single-task learning and automatically calculated for multi-task learning. This argument only works in the train phase.
+            file_format: the file format of the input file. Supported format: tsv. Default is tsv.
+            shuffle_train: whether to shuffle the training dataset. Default is True. This argument only works in the train phase.
+        """
+        self._batch_size = batch_size
+        self._num_epochs = num_epochs
+        self._data_generator = self._reader.data_generator( \
+            input_file, batch_size, num_epochs if self._phase == 'train' else 1, \
+            shuffle=shuffle_train if self._phase == 'train' else False, \
+            phase=self._phase)
+
+    def _iterator(self):
+        names = ['token_ids', 'segment_ids', 'position_ids', 'task_ids', 'input_mask', 'label_ids', \
+                 'token_ids_neg', 'segment_ids_neg', 'position_ids_neg', 'task_ids_neg', 'input_mask_neg']
+        if self._learning_strategy == 'pairwise':
+            names.remove('label_ids')
+
+        for batch in self._data_generator():
+            outputs = {n: i for n, i in zip(names, batch)}
+            ret = {}
+            # TODO: move runtime shape check here
+            for attr in self.outputs_attr.keys():
+                ret[attr] = outputs[attr]
+            yield ret

    @property
    def num_examples(self):
        return self._reader.get_num_examples(phase=self._phase)

+    @property
+    def num_epochs(self):
+        return self._num_epochs
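# Illustrative helper (not part of this commit): write a pointwise-format tsv
# file matching the header/field layout documented above. The path and rows
# are placeholders.

import io

def write_pointwise_tsv(path, rows):
    """rows: an iterable of (label, text_a, text_b) tuples."""
    with io.open(path, 'w', encoding='utf8') as f:
        f.write(u'label\ttext_a\ttext_b\n')  # the header line is required
        for label, text_a, text_b in rows:
            f.write(u'{}\t{}\t{}\n'.format(label, text_a, text_b))

# write_pointwise_tsv('data/match/train.tsv',
#                     [(1, u'Today is a good day.', u'what a nice day!')])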
@@ -13,79 +13,79 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from paddlepalm.interface import reader
-from paddlepalm.reader.utils.reader4ernie import MaskLMReader
+from paddlepalm.reader.base_reader import Reader
+from paddlepalm.reader.utils.reader4ernie import MaskLMReader as MLMReader
import numpy as np

-class Reader(reader):
+class MaskLMReader(Reader):

-    def __init__(self, config, phase='train', dev_count=1, print_prefix=''):
+    def __init__(self, vocab_path, max_len, tokenizer='wordpiece', \
+                 lang='en', seed=None, do_lower_case=False, phase='train', dev_count=1, print_prefix=''):
        """
        Args:
            phase: the running phase of this reader. Supported phases: train, predict.
        """

-        self._is_training = phase == 'train'
-
-        reader = MaskLMReader(config['vocab_path'],
-            max_seq_len=config['max_seq_len'],
-            do_lower_case=config.get('do_lower_case', False),
-            for_cn=config.get('for_cn', False),
-            random_seed=config.get('seed', None))
-        self._reader = reader
-        self._dev_count = dev_count
-
-        self._batch_size = config['batch_size']
-        self._max_seq_len = config['max_seq_len']
-        if phase == 'train':
-            self._input_file = config['train_file']
-            self._num_epochs = None  # prevent the iterator from terminating
-            self._shuffle = config.get('shuffle', True)
-            self._shuffle_buffer = config.get('shuffle_buffer', 5000)
-        elif phase == 'eval':
-            self._input_file = config['dev_file']
-            self._num_epochs = 1
-            self._shuffle = False
-            self._batch_size = config.get('pred_batch_size', self._batch_size)
-        elif phase == 'pred':
-            self._input_file = config['pred_file']
-            self._num_epochs = 1
-            self._shuffle = False
-            self._batch_size = config.get('pred_batch_size', self._batch_size)
-
-        self._phase = phase
-        # self._batch_size =
-        self._print_first_n = config.get('print_first_n', 1)
+        Reader.__init__(self, phase)
+
+        assert lang.lower() in ['en', 'cn', 'english', 'chinese'], "supported language: en (English), cn (Chinese)."
+        assert phase in ['train', 'predict'], "supported phase: train, predict."
+
+        for_cn = lang.lower() == 'cn' or lang.lower() == 'chinese'
+
+        self._register.add('token_ids')
+        self._register.add('mask_pos')
+        if phase == 'train':
+            self._register.add('mask_label')
+
+        self._is_training = phase == 'train'
+
+        mlm_reader = MLMReader(vocab_path,
+            max_seq_len=max_len,
+            do_lower_case=do_lower_case,
+            for_cn=for_cn,
+            random_seed=seed)
+        self._reader = mlm_reader
+        self._phase = phase
+        self._dev_count = dev_count

    @property
    def outputs_attr(self):
-        return {"token_ids": [[-1, -1], 'int64'],
-                "position_ids": [[-1, -1], 'int64'],
-                "segment_ids": [[-1, -1], 'int64'],
-                "input_mask": [[-1, -1, 1], 'float32'],
-                "task_ids": [[-1, -1], 'int64'],
-                "mask_label": [[-1], 'int64'],
-                "mask_pos": [[-1], 'int64'],
-                }
+        attrs = {"token_ids": [[-1, -1], 'int64'],
+                 "position_ids": [[-1, -1], 'int64'],
+                 "segment_ids": [[-1, -1], 'int64'],
+                 "input_mask": [[-1, -1, 1], 'float32'],
+                 "task_ids": [[-1, -1], 'int64'],
+                 "mask_label": [[-1], 'int64'],
+                 "mask_pos": [[-1], 'int64']
+                 }
+        return self._get_registed_attrs(attrs)

-    def load_data(self):
-        self._data_generator = self._reader.data_generator(self._input_file, self._batch_size, self._num_epochs, dev_count=self._dev_count, shuffle=self._shuffle, phase=self._phase)
+    def load_data(self, input_file, batch_size, num_epochs=None, \
+                  file_format='csv', shuffle_train=True):
+        self._batch_size = batch_size
+        self._num_epochs = num_epochs
+        self._data_generator = self._reader.data_generator( \
+            input_file, batch_size, num_epochs if self._phase == 'train' else 1, \
+            shuffle=shuffle_train if self._phase == 'train' else False, \
+            phase=self._phase)

-    def iterator(self):
-
-        def list_to_dict(x):
-            names = ['token_ids', 'position_ids', 'segment_ids', 'input_mask',
-                     'task_ids', 'mask_label', 'mask_pos']
-            outputs = {n: i for n, i in zip(names, x)}
-            # outputs['batchsize_x_seqlen'] = [self._batch_size * len(outputs['token_ids'][0]) - 1]
-            return outputs
-
+    def _iterator(self):
+        names = ['token_ids', 'position_ids', 'segment_ids', 'input_mask',
+                 'task_ids', 'mask_label', 'mask_pos']
        for batch in self._data_generator():
-            # print(np.shape(list_to_dict(batch)['token_ids']))
-            # print(list_to_dict(batch)['mask_label'].tolist())
-            yield list_to_dict(batch)
+            outputs = {n: i for n, i in zip(names, batch)}
+            ret = {}
+            # TODO: move runtime shape check here
+            for attr in self.outputs_attr.keys():
+                ret[attr] = outputs[attr]
+            yield ret

    def get_epoch_outputs(self):
        return {'examples': self._reader.get_examples(self._phase),
......
@@ -95,3 +95,7 @@ class Reader(reader):
    def num_examples(self):
        return self._reader.get_num_examples(phase=self._phase)

+    @property
+    def num_epochs(self):
+        return self._num_epochs
@@ -13,77 +13,113 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from paddlepalm.interface import reader
-from paddlepalm.reader.utils.reader4ernie import MRCReader
+from paddlepalm.reader.base_reader import Reader
+from paddlepalm.reader.utils.reader4ernie import MRCReader as MRCReader_t
+import numpy as np

-class Reader(reader):
-
-    def __init__(self, config, phase='train', dev_count=1, print_prefix=''):
-        """
-        Args:
-            phase: train, eval, pred
-        """
-
-        self._is_training = phase == 'train'
-
-        reader = MRCReader(config['vocab_path'],
-            max_seq_len=config['max_seq_len'],
-            do_lower_case=config.get('do_lower_case', False),
-            tokenizer='FullTokenizer',
-            for_cn=config.get('for_cn', False),
-            doc_stride=config['doc_stride'],
-            remove_noanswer=config.get('remove_noanswer', True),
-            max_query_length=config['max_query_len'],
-            random_seed=config.get('seed', None))
-        self._reader = reader
-        self._dev_count = dev_count
-
-        self._batch_size = config['batch_size']
-        self._max_seq_len = config['max_seq_len']
-        if phase == 'train':
-            self._input_file = config['train_file']
-            # self._num_epochs = config['num_epochs']
-            self._num_epochs = None  # prevent the iterator from terminating
-            self._shuffle = config.get('shuffle', True)
-            self._shuffle_buffer = config.get('shuffle_buffer', 5000)
-        if phase == 'eval':
-            self._input_file = config['dev_file']
-            self._num_epochs = 1
-            self._shuffle = False
-            self._batch_size = config.get('pred_batch_size', self._batch_size)
-        elif phase == 'pred':
-            self._input_file = config['pred_file']
-            self._num_epochs = 1
-            self._shuffle = False
-            self._batch_size = config.get('pred_batch_size', self._batch_size)
-
-        self._phase = phase
-        # self._batch_size =
-        self._print_first_n = config.get('print_first_n', 1)
-
-        # TODO: without slide window version
-        self._with_slide_window = config.get('with_slide_window', False)
+class MRCReader(Reader):
+    """
+    The reader completes the loading and processing of SQuAD-like machine reading comprehension datasets. Supported file format: json.
+
+    The outermost structure of a dataset is a dictionary containing a version field and a data field. In the data field, each example contains the title of an article and several paragraphs. Each paragraph contains a paragraph context and the corresponding question-answer pairs. Each q-a pair consists of a question with a globally unique ID and (several) answers. Each answer item contains the answer text itself and its starting position in the context. Note that the starting position is at the character level. For test sets, the answers field is not required.
+
+    A typical case is shown as follows:
+
+    ```
+    {"version": "1.0",
+     "data": [
+         {"title": "...",
+          "paragraphs": [
+              {"context": "...",
+               "qas": [
+                   {"question": "...",
+                    "id": "...",
+                    "answers": [
+                        {"text": "...",
+                         "answer_start": ...},
+                        ...
+                    ]},
+                   ...
+               ]},
+              ...
+          ]},
+         ...
+     ]}
+    ```
+    """
+
+    def __init__(self, vocab_path, max_len, max_query_len, doc_stride, \
+                 tokenizer='wordpiece', lang='en', seed=None, do_lower_case=False, \
+                 remove_noanswer=True, phase='train'):
+        """Create a new Reader for loading and processing machine reading comprehension task data.
+
+        Args:
+            vocab_path: the vocab file path for tokenization and token_ids generation.
+            max_len: the maximum length of the sequence (after word segmentation). The part exceeding max_len will be removed from the right.
+            max_query_len: the maximum length of the query/question (after word segmentation).
+            doc_stride: the slide stride of the context window.
+            tokenizer: string type. The name of the tokenizer to use. A tokenizer converts raw text into tokens. Available tokenizers: wordpiece.
+            lang: the language of the dataset. Supported languages: en (English), cn (Chinese). Default is en (English).
+            seed: int type. The random seed used to shuffle the dataset. Default is None, meaning no random seed is used.
+            do_lower_case: bool type. Whether to lowercase English text. Default is False. This argument only works on English text.
+            remove_noanswer: bool type. Whether to remove unanswerable questions and invalid answers.
+            phase: the running phase of this reader. Supported phases: train, predict. Default is train.
+
+        Return:
+            a Reader object for machine reading comprehension tasks.
+        """
+        Reader.__init__(self, phase)
+
+        assert lang.lower() in ['en', 'cn', 'english', 'chinese'], "supported language: en (English), cn (Chinese)."
+        assert phase in ['train', 'predict'], "supported phase: train, predict."
+
+        for_cn = lang.lower() == 'cn' or lang.lower() == 'chinese'
+
+        self._register.add('token_ids')
+        if phase == 'train':
+            self._register.add("start_positions")
+            self._register.add("end_positions")
+        else:
+            self._register.add("unique_ids")
+
+        self._is_training = phase == 'train'
+
+        mrc_reader = MRCReader_t(vocab_path,
+            max_seq_len=max_len,
+            do_lower_case=do_lower_case,
+            tokenizer=tokenizer,
+            doc_stride=doc_stride,
+            remove_noanswer=remove_noanswer,
+            max_query_length=max_query_len,
+            for_cn=for_cn,
+            random_seed=seed)
+        self._reader = mrc_reader
+        self._phase = phase

    @property
    def outputs_attr(self):
-        if self._is_training:
-            return {"token_ids": [[-1, -1], 'int64'],
-                    "position_ids": [[-1, -1], 'int64'],
-                    "segment_ids": [[-1, -1], 'int64'],
-                    "input_mask": [[-1, -1, 1], 'float32'],
-                    "start_positions": [[-1], 'int64'],
-                    "end_positions": [[-1], 'int64'],
-                    "task_ids": [[-1, -1], 'int64']
-                    }
-        else:
-            return {"token_ids": [[-1, -1], 'int64'],
-                    "position_ids": [[-1, -1], 'int64'],
-                    "segment_ids": [[-1, -1], 'int64'],
-                    "task_ids": [[-1, -1], 'int64'],
-                    "input_mask": [[-1, -1, 1], 'float32'],
-                    "unique_ids": [[-1], 'int64']
-                    }
+        attrs = {"token_ids": [[-1, -1], 'int64'],
+                 "position_ids": [[-1, -1], 'int64'],
+                 "segment_ids": [[-1, -1], 'int64'],
+                 "input_mask": [[-1, -1, 1], 'float32'],
+                 "start_positions": [[-1], 'int64'],
+                 "end_positions": [[-1], 'int64'],
+                 "task_ids": [[-1, -1], 'int64'],
+                 "unique_ids": [[-1], 'int64']
+                 }
+        return self._get_registed_attrs(attrs)

    @property
    def epoch_outputs_attr(self):
......
@@ -91,26 +127,44 @@ class Reader(reader):
        return {"examples": None,
                "features": None}

-    def load_data(self):
-        self._data_generator = self._reader.data_generator(self._input_file, self._batch_size, self._num_epochs, dev_count=self._dev_count, shuffle=self._shuffle, phase=self._phase)
-
-    def iterator(self):
-
-        def list_to_dict(x):
-            names = ['token_ids', 'segment_ids', 'position_ids', 'task_ids', 'input_mask',
-                     'start_positions', 'end_positions', 'unique_ids']
-            outputs = {n: i for n, i in zip(names, x)}
-            if self._is_training:
-                del outputs['unique_ids']
-            else:
-                del outputs['start_positions']
-                del outputs['end_positions']
-            return outputs
-
+    def load_data(self, input_file, batch_size, num_epochs=None, file_format='csv', shuffle_train=True):
+        """Load mrc data into the reader.
+
+        Args:
+            input_file: the dataset file path. The file format should be consistent with the `file_format` argument.
+            batch_size: the number of examples per yield. CAUTION: if your environment has multiple GPU devices (marked as dev_count), batch_size must be divisible by dev_count with no remainder!
+            num_epochs: the number of traversals over the input examples. Default is None, meaning once for single-task learning and automatically calculated for multi-task learning. This argument only works in the train phase.
+            file_format: the file format of the input file. Supported format: tsv. Default is tsv.
+            shuffle_train: whether to shuffle the training dataset. Default is True. This argument only works in the train phase.
+        """
+        self._batch_size = batch_size
+        self._num_epochs = num_epochs
+        self._data_generator = self._reader.data_generator( \
+            input_file, batch_size, num_epochs if self._phase == 'train' else 1, \
+            shuffle=shuffle_train if self._phase == 'train' else False, \
+            phase=self._phase)
+
+    def _iterator(self):
+        names = ['token_ids', 'segment_ids', 'position_ids', 'task_ids', 'input_mask',
+                 'start_positions', 'end_positions', 'unique_ids']
+        if self._is_training:
+            names.remove('unique_ids')
+
        for batch in self._data_generator():
-            yield list_to_dict(batch)
+            outputs = {n: i for n, i in zip(names, batch)}
+            ret = {}
+            # TODO: move runtime shape check here
+            for attr in self.outputs_attr.keys():
+                ret[attr] = outputs[attr]
+            if not self._is_training:
+                assert 'unique_ids' in ret, ret
+            yield ret

    def get_epoch_outputs(self):
        return {'examples': self._reader.get_examples(self._phase),
                'features': self._reader.get_features(self._phase)}
......
@@ -118,3 +172,7 @@ class Reader(reader):
    def num_examples(self):
        return self._reader.get_num_examples(phase=self._phase)

+    @property
+    def num_epochs(self):
+        return self._num_epochs
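# Illustrative helper (not part of this commit): build a minimal dataset in the
# SQuAD-like json layout documented above. The content and output path are
# placeholders; note that answer_start is a character-level offset.

import json

sample = {
    "version": "1.0",
    "data": [{
        "title": "sample",
        "paragraphs": [{
            "context": "PaddlePALM is a multi-task learning framework.",
            "qas": [{
                "question": "What is PaddlePALM?",
                "id": "q-0001",
                "answers": [{
                    "text": "a multi-task learning framework",
                    "answer_start": 14  # "a multi-task..." starts at char 14
                }]
            }]
        }]
    }]
}

with open('mrc_train_sample.json', 'w') as f:
    json.dump(sample, f, indent=2)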
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddlepalm.reader.base_reader import Reader
from paddlepalm.reader.utils.reader4ernie import SequenceLabelReader as SLReader
class SequenceLabelReader(Reader):
"""
The reader completes the loading and processing of sequence labeling type task (e.g, pos tagging, named entity recognition) dataset. Supported file format: tsv.
"""
def __init__(self, vocab_path, max_len, label_map_config, tokenizer='wordpiece', \
lang='en', seed=None, do_lower_case=False, phase='train', dev_count=1, print_prefix=''):
"""
Args:
phase: train, eval, pred
lang: en, ch, ...
"""
Reader.__init__(self, phase)
assert lang.lower() in ['en', 'cn', 'english', 'chinese'], "supported language: en (English), cn (Chinese)."
assert phase in ['train', 'predict'], "supported phase: train, predict."
for_cn = lang.lower() == 'cn' or lang.lower() == 'chinese'
self._register.add('token_ids')
self._register.add('seq_lens')
if phase == 'train':
self._register.add('label_ids')
self._is_training = phase == 'train'
ner_reader = SLReader(vocab_path,
max_seq_len=max_len,
do_lower_case=do_lower_case,
for_cn=for_cn,
random_seed=seed,
label_map_config=label_map_config)
self._reader = ner_reader
self._phase = phase
self._dev_count = dev_count
@property
def outputs_attr(self):
attrs = {"token_ids": [[-1, -1], 'int64'],
"position_ids": [[-1, -1], 'int64'],
"segment_ids": [[-1, -1], 'int64'],
"task_ids": [[-1, -1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32'],
"seq_lens": [[-1], 'int64'],
"label_ids": [[-1, -1], 'int64']}
return self._get_registed_attrs(attrs)
def load_data(self, input_file, batch_size, num_epochs=None, \
file_format='tsv', shuffle_train=True):
"""Load sequence labeling data into reader.
Args:
input_file: the dataset file path. File format should keep consistent with `file_format` argument.
batch_size: number of examples for once yield. CAUSIOUS! If your environment exists multiple GPU devices (marked as dev_count), the batch_size should be divided by dev_count with no remainder!
num_epochs: the travelsal times of input examples. Default is None, means once for single-task learning and automatically calculated for multi-task learning. This argument only works on train phase.
file_format: the file format of input file. Supported format: tsv. Default is tsv.
shuffle_train: whether to shuffle training dataset. Default is True. This argument only works on training phase.
"""
self._batch_size = batch_size
self._num_epochs = num_epochs
self._data_generator = self._reader.data_generator( \
input_file, batch_size, num_epochs if self._phase == 'train' else 1, \
shuffle=shuffle_train if self._phase == 'train' else False, \
phase=self._phase)
def _iterator(self):
        names = ['token_ids', 'segment_ids', 'position_ids', 'task_ids', 'input_mask',
                 'label_ids', 'seq_lens']
for batch in self._data_generator():
outputs = {n: i for n,i in zip(names, batch)}
ret = {}
# TODO: move runtime shape check here
for attr in self.outputs_attr.keys():
ret[attr] = outputs[attr]
yield ret
def get_epoch_outputs(self):
return {'examples': self._reader.get_examples(self._phase),
'features': self._reader.get_features(self._phase)}
@property
def num_examples(self):
return self._reader.get_num_examples(phase=self._phase)
@property
def num_epochs(self):
return self._num_epochs
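# Illustrative (not part of this commit): the label_map_config argument above is
# assumed to point at a json file mapping label strings to integer ids, e.g. a
# BIO-style NER tag map:

import json

label_map = {"B-PER": 0, "I-PER": 1, "B-ORG": 2, "I-ORG": 3, "O": 4}
with open('label_map.json', 'w') as f:
    json.dump(label_map, f)

# reader = SequenceLabelReader('pretrain/ernie/vocab.txt', max_len=128,
#                              label_map_config='label_map.json', lang='cn')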
@@ -19,57 +19,76 @@ from __future__ import print_function

import numpy as np

-def mask(batch_tokens, total_token_num, vocab_size, CLS=1, SEP=2, MASK=3):
-    """
-    Add masks to batch_tokens; return out, mask_label, mask_pos.
-    Note: mask_pos corresponds to batch_tokens after padding.
-    """
-    max_len = max([len(sent) for sent in batch_tokens])
-    mask_label = []
-    mask_pos = []
-    prob_mask = np.random.rand(total_token_num)
-    # Note: the first token is [CLS], so [low=1]
-    replace_ids = np.random.randint(1, high=vocab_size, size=total_token_num)
-    pre_sent_len = 0
-    prob_index = 0
-    for sent_index, sent in enumerate(batch_tokens):
-        mask_flag = False
-        prob_index += pre_sent_len
-        for token_index, token in enumerate(sent):
-            prob = prob_mask[prob_index + token_index]
-            if prob > 0.15:
-                continue
-            elif 0.03 < prob <= 0.15:
-                # mask
-                if token != SEP and token != CLS:
-                    mask_label.append(sent[token_index])
-                    sent[token_index] = MASK
-                    mask_flag = True
-                    mask_pos.append(sent_index * max_len + token_index)
-            elif 0.015 < prob <= 0.03:
-                # random replace
-                if token != SEP and token != CLS:
-                    mask_label.append(sent[token_index])
-                    sent[token_index] = replace_ids[prob_index + token_index]
-                    mask_flag = True
-                    mask_pos.append(sent_index * max_len + token_index)
-            else:
-                # keep the original token
-                if token != SEP and token != CLS:
-                    mask_label.append(sent[token_index])
-                    mask_pos.append(sent_index * max_len + token_index)
-        pre_sent_len = len(sent)
-        # ensure that at least one word in each sentence is masked
-        while not mask_flag:
-            token_index = int(np.random.randint(1, high=len(sent) - 1, size=1))
-            if sent[token_index] != SEP and sent[token_index] != CLS:
-                mask_label.append(sent[token_index])
-                sent[token_index] = MASK
-                mask_flag = True
-                mask_pos.append(sent_index * max_len + token_index)
-    mask_label = np.array(mask_label).astype("int64").reshape([-1])
-    mask_pos = np.array(mask_pos).astype("int64").reshape([-1])
-    return batch_tokens, mask_label, mask_pos
+def mask(batch_tokens, total_token_num, vocab_size, CLS=1, SEP=2, MASK=3, dev_count=1):
+    """
+    Add masks to batch_tokens; return out, mask_label, mask_pos.
+    Note: mask_pos corresponds to batch_tokens after padding.
+    """
+    max_len = max([len(sent) for sent in batch_tokens])
+
+    multidev_batch_tokens = []
+    multidev_mask_label = []
+    multidev_mask_pos = []
+
+    big_batch_tokens = batch_tokens
+    stride = len(batch_tokens) // dev_count
+    if stride == 0:
+        return None, None, None
+    p = stride
+
+    for i in range(dev_count):
+        batch_tokens = big_batch_tokens[p-stride:p]
+        p += stride
+        mask_label = []
+        mask_pos = []
+        prob_mask = np.random.rand(total_token_num)
+        # Note: the first token is [CLS], so [low=1]
+        replace_ids = np.random.randint(1, high=vocab_size, size=total_token_num)
+        pre_sent_len = 0
+        prob_index = 0
+        for sent_index, sent in enumerate(batch_tokens):
+            mask_flag = False
+            prob_index += pre_sent_len
+            for token_index, token in enumerate(sent):
+                prob = prob_mask[prob_index + token_index]
+                if prob > 0.15:
+                    continue
+                elif 0.03 < prob <= 0.15:
+                    # mask
+                    if token != SEP and token != CLS:
+                        mask_label.append(sent[token_index])
+                        sent[token_index] = MASK
+                        mask_flag = True
+                        mask_pos.append(sent_index * max_len + token_index)
+                elif 0.015 < prob <= 0.03:
+                    # random replace
+                    if token != SEP and token != CLS:
+                        mask_label.append(sent[token_index])
+                        sent[token_index] = replace_ids[prob_index + token_index]
+                        mask_flag = True
+                        mask_pos.append(sent_index * max_len + token_index)
+                else:
+                    # keep the original token
+                    if token != SEP and token != CLS:
+                        mask_label.append(sent[token_index])
+                        mask_pos.append(sent_index * max_len + token_index)
+            pre_sent_len = len(sent)
+            # ensure that at least one word in each sentence is masked
+            while not mask_flag:
+                token_index = int(np.random.randint(1, high=len(sent) - 1, size=1))
+                if sent[token_index] != SEP and sent[token_index] != CLS:
+                    mask_label.append(sent[token_index])
+                    sent[token_index] = MASK
+                    mask_flag = True
+                    mask_pos.append(sent_index * max_len + token_index)
+        mask_label = np.array(mask_label).astype("int64").reshape([-1])
+        mask_pos = np.array(mask_pos).astype("int64").reshape([-1])
+
+        multidev_batch_tokens.extend(batch_tokens)
+        multidev_mask_label.append(mask_label)
+        multidev_mask_pos.append(mask_pos)
+
+    return multidev_batch_tokens, multidev_mask_label, multidev_mask_pos

def prepare_batch_data(insts,
......
@@ -83,7 +102,8 @@ def prepare_batch_data(insts,
                       task_id=0,
                       return_input_mask=True,
                       return_max_len=True,
-                      return_num_token=False):
+                      return_num_token=False,
+                      dev_count=1):
    """
    1. generate Tensor of data
    2. generate Tensor of position
......
@@ -101,7 +121,8 @@ def prepare_batch_data(insts,
            vocab_size=voc_size,
            CLS=cls_id,
            SEP=sep_id,
-            MASK=mask_id)
+            MASK=mask_id,
+            dev_count=dev_count)
    # Second step: padding
    src_id, self_input_mask = pad_batch_data(
        out,
......
@@ -125,7 +146,7 @@ def prepare_batch_data(insts,
    return_list = [
        src_id, pos_id, sent_id, self_input_mask, task_ids, mask_label, mask_pos
    ]

-    return return_list if len(return_list) > 1 else return_list[0]
+    return return_list

def pad_batch_data(insts,
......
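# Note on the thresholds in mask() above (true of the code as written): a token
# whose uniform draw is <= 0.15 is selected for prediction (15% of tokens).
# Within that band, draws in (0.03, 0.15] become [MASK] (12% of all tokens,
# i.e. 80% of selected), draws in (0.015, 0.03] are replaced by a random token
# (1.5%, i.e. 10% of selected), and draws <= 0.015 keep the original token
# (1.5%, i.e. 10% of selected): the standard BERT 80/10/10 masking recipe.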
@@ -29,6 +29,7 @@ import six
from io import open
from collections import namedtuple

import paddlepalm as palm
import paddlepalm.tokenizer.ernie_tokenizer as tokenization
from paddlepalm.reader.utils.batching4ernie import pad_batch_data
from paddlepalm.reader.utils.mlm_batching import prepare_batch_data
@@ -41,6 +42,12 @@ if six.PY3:
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
    sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8')

if sys.version[0] == '2':
    reload(sys)
    sys.setdefaultencoding('utf-8')
else:
    import importlib
    importlib.reload(sys)


def csv_reader(fd, delimiter='\t'):
    def gen():
@@ -49,20 +56,23 @@ def csv_reader(fd, delimiter='\t'):
    return gen()
class Reader(object):
    def __init__(self,
                 vocab_path,
                 label_map_config=None,
                 max_seq_len=512,
                 do_lower_case=True,
                 in_tokens=False,
                 is_inference=False,
                 learning_strategy='pointwise',
                 random_seed=None,
                 tokenizer="FullTokenizer",
                 phase='train',
                 is_classify=True,
                 is_regression=False,
                 for_cn=True,
                 task_id=0):

        assert phase in ['train', 'predict'], "supported phase: train, predict."
        self.max_seq_len = max_seq_len
        self.tokenizer = tokenization.FullTokenizer(
            vocab_file=vocab_path, do_lower_case=do_lower_case)
@@ -72,7 +82,9 @@ class Reader(object):
        self.sep_id = self.vocab["[SEP]"]
        self.mask_id = self.vocab["[MASK]"]
        self.in_tokens = in_tokens
        self.phase = phase
        self.is_inference = is_inference
        self.learning_strategy = learning_strategy
        self.for_cn = for_cn
        self.task_id = task_id
@@ -83,7 +95,6 @@ class Reader(object):
        self.current_example = 0
        self.current_epoch = 0
        self.num_examples = 0

        self.examples = {}

        if label_map_config:
@@ -124,6 +135,7 @@ class Reader(object):
                tokens_a.pop()
            else:
                tokens_b.pop()
    def _convert_example_to_record(self, example, max_seq_length, tokenizer):
        """Converts a single `Example` into a single `Record`."""
@@ -131,26 +143,33 @@ class Reader(object):
        text_a = tokenization.convert_to_unicode(example.text_a)
        tokens_a = tokenizer.tokenize(text_a)
        tokens_b = None

        has_text_b = False
        has_text_b_neg = False
        if isinstance(example, dict):
            has_text_b = "text_b" in example.keys()
            has_text_b_neg = "text_b_neg" in example.keys()
        else:
            has_text_b = "text_b" in example._fields
            has_text_b_neg = "text_b_neg" in example._fields

        if has_text_b:
            text_b = tokenization.convert_to_unicode(example.text_b)
            tokens_b = tokenizer.tokenize(text_b)
            # Modifies `tokens_a` and `tokens_b` in place so that the total
            # length is less than the specified length.
            # Account for [CLS], [SEP], [SEP] with "- 3"
            self._truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)

            if has_text_b_neg and self.phase == 'train':
                tokens_a_neg = tokenizer.tokenize(text_a)
                text_b_neg = tokenization.convert_to_unicode(example.text_b_neg)
                tokens_b_neg = tokenizer.tokenize(text_b_neg)
                self._truncate_seq_pair(tokens_a_neg, tokens_b_neg, max_seq_length - 3)
        else:
            # Account for [CLS] and [SEP] with "- 2"
            if len(tokens_a) > max_seq_length - 2:
                tokens_a = tokens_a[0:(max_seq_length - 2)]

        # The convention in BERT/ERNIE is:
        # (a) For sequence pairs:
@@ -173,6 +192,7 @@ class Reader(object):
        tokens = []
        text_type_ids = []
        tokens.append("[CLS]")
        text_type_ids.append(0)
        for token in tokens_a:
            tokens.append(token)
@@ -190,6 +210,29 @@ class Reader(object):
        token_ids = tokenizer.convert_tokens_to_ids(tokens)
        position_ids = list(range(len(token_ids)))

        if has_text_b_neg and self.phase == 'train':
            tokens_neg = []
            text_type_ids_neg = []
            tokens_neg.append("[CLS]")
            text_type_ids_neg.append(0)
            for token in tokens_a_neg:
                tokens_neg.append(token)
                text_type_ids_neg.append(0)
            tokens_neg.append("[SEP]")
            text_type_ids_neg.append(0)

            if tokens_b_neg:
                for token in tokens_b_neg:
                    tokens_neg.append(token)
                    text_type_ids_neg.append(1)
                tokens_neg.append("[SEP]")
                text_type_ids_neg.append(1)

            token_ids_neg = tokenizer.convert_tokens_to_ids(tokens_neg)
            position_ids_neg = list(range(len(token_ids_neg)))

        if self.is_inference:
            Record = namedtuple('Record',
                                ['token_ids', 'text_type_ids', 'position_ids'])
@@ -198,28 +241,41 @@ class Reader(object):
                text_type_ids=text_type_ids,
                position_ids=position_ids)
        else:
            qid = None
            if "qid" in example._fields:
                qid = example.qid

            if self.learning_strategy == 'pairwise' and self.phase == 'train':
                Record = namedtuple('Record',
                                    ['token_ids', 'text_type_ids', 'position_ids', 'token_ids_neg', 'text_type_ids_neg', 'position_ids_neg', 'qid'])
                record = Record(
                    token_ids=token_ids,
                    text_type_ids=text_type_ids,
                    position_ids=position_ids,
                    token_ids_neg=token_ids_neg,
                    text_type_ids_neg=text_type_ids_neg,
                    position_ids_neg=position_ids_neg,
                    qid=qid)
            else:
                if self.label_map:
                    label_id = self.label_map[example.label]
                else:
                    label_id = example.label

                Record = namedtuple('Record', [
                    'token_ids', 'text_type_ids', 'position_ids', 'label_id', 'qid'
                ])
                record = Record(
                    token_ids=token_ids,
                    text_type_ids=text_type_ids,
                    position_ids=position_ids,
                    label_id=label_id,
                    qid=qid)
        return record
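Depending on `learning_strategy` and phase, the Record carries different fields. A minimal sketch of the two shapes (field values are illustrative, not from a real dataset):

```python
# pointwise / predict-time record: one (query, candidate) pair plus a label
Record(token_ids=[1, 9, 2, 17, 2], text_type_ids=[0, 0, 0, 1, 1],
       position_ids=[0, 1, 2, 3, 4], label_id=1, qid=None)

# pairwise train-time record: a positive pair and a negative pair, no label
Record(token_ids=[1, 9, 2, 17, 2], text_type_ids=[0, 0, 0, 1, 1],
       position_ids=[0, 1, 2, 3, 4],
       token_ids_neg=[1, 9, 2, 33, 2], text_type_ids_neg=[0, 0, 0, 1, 1],
       position_ids_neg=[0, 1, 2, 3, 4], qid=None)
```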
    def _prepare_batch_data(self, examples, batch_size, phase='train'):
        """generate batch records"""
        batch_records, max_len = [], 0
        if len(examples) < batch_size:
@@ -228,7 +284,7 @@ class Reader(object):
            if phase == "train":
                self.current_example = index
            record = self._convert_example_to_record(example, self.max_seq_len,
                                                     self.tokenizer)
            max_len = max(max_len, len(record.token_ids))
            if self.in_tokens:
                to_append = (len(batch_records) + 1) * max_len <= batch_size
@@ -240,16 +296,14 @@ class Reader(object):
                yield self._pad_batch_records(batch_records)
                batch_records, max_len = [record], len(record.token_ids)

        if phase == 'predict' and batch_records:
            yield self._pad_batch_records(batch_records)

    def get_num_examples(self, input_file=None, phase='train'):
        if input_file is None:
            return len(self.examples.get(phase, []))
        else:
            examples = self._read_tsv(input_file)
            return len(examples)
@@ -285,87 +339,16 @@ class Reader(object):
                    if len(all_dev_batches) == dev_count:
                        for batch in all_dev_batches:
                            yield batch
                        all_dev_batches = []

        def f():
            for i in wrapper():
                yield i
        return f
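The `all_dev_batches` buffering above feeds multi-GPU runs: batches are released only in complete groups of `dev_count`, one per device. A self-contained sketch of that grouping (a hypothetical helper, not part of the codebase):

```python
def group_by_device(batches, dev_count):
    """Yield batches only in complete groups of dev_count, mirroring the
    all_dev_batches buffer above; a trailing incomplete group is dropped."""
    buf = []
    for b in batches:
        buf.append(b)
        if len(buf) == dev_count:
            for item in buf:
                yield item
            buf = []

print(list(group_by_device(range(5), 2)))  # -> [0, 1, 2, 3]
```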
class MaskLMReader(Reader):
    def _convert_example_to_record(self, example, max_seq_length, tokenizer):
        """Converts a single `Example` into a single `Record`."""
@@ -432,13 +415,6 @@ class MaskLMReader(Reader):
        token_ids = tokenizer.convert_tokens_to_ids(tokens)
        position_ids = list(range(len(token_ids)))

        return [token_ids, text_type_ids, position_ids]
    def batch_reader(self, examples, batch_size, in_tokens, phase):
@@ -457,7 +433,7 @@ class MaskLMReader(Reader):
                    batch = [parsed_line]
                    total_token_num = len(parsed_line[0])

        if len(batch) > 0 and phase == 'predict':
            yield batch, total_token_num
    def data_generator(self,
@@ -499,19 +475,103 @@ class MaskLMReader(Reader):
                    # max_len=self.max_seq_len,  # NOTE: padding to max_seq_len would misalign mask_pos with the real token positions, because mask_pos is computed from the longest sequence within the batch.
                    return_input_mask=True,
                    return_max_len=False,
                    return_num_token=False,
                    dev_count=dev_count)

                for piece in palm.distribute.yield_pieces(batch_data, ['s', 's', 's', 's', 's', 'u', 'u'], batch_size):
                    yield piece

        return wrapper
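The distribute strategy list maps one flag to each field of `batch_data`: here the five padded tensors are split across devices (`'s'`), while `mask_label` and `mask_pos` are already per-device lists and are consumed one element per device (`'u'`). A rough, illustrative re-implementation under those assumptions (not the real `palm.distribute.yield_pieces`):

```python
def yield_pieces_sketch(batch_data, dist_strategy, batch_size, dev_count=2):
    """Split a mixed batch into per-device pieces.
    's' fields are sliced along the batch axis; 'u' fields are assumed
    to be pre-split per-device sequences and are indexed directly."""
    per_dev = batch_size // dev_count
    for dev in range(dev_count):
        piece = []
        for field, flag in zip(batch_data, dist_strategy):
            if flag == 's':
                piece.append(field[dev * per_dev:(dev + 1) * per_dev])
            else:  # 'u'
                piece.append(field[dev])
        yield piece
```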
class ClassifyReader(Reader):
def _read_tsv(self, input_file, quotechar=None):
"""Reads a tab separated value file."""
with open(input_file, 'r', encoding='utf8') as f:
reader = csv_reader(f)
headers = next(reader)
text_indices = [
index for index, h in enumerate(headers) if h != "label"
]
Example = namedtuple('Example', headers)
examples = []
for line in reader:
for index, text in enumerate(line):
if index in text_indices:
if self.for_cn:
line[index] = text.replace(' ', '')
else:
line[index] = text
example = Example(*line)
examples.append(example)
return examples
def _pad_batch_records(self, batch_records):
batch_token_ids = [record.token_ids for record in batch_records]
batch_text_type_ids = [record.text_type_ids for record in batch_records]
batch_position_ids = [record.position_ids for record in batch_records]
        if self.phase == 'train' and self.learning_strategy == 'pairwise':
batch_token_ids_neg = [record.token_ids_neg for record in batch_records]
batch_text_type_ids_neg = [record.text_type_ids_neg for record in batch_records]
batch_position_ids_neg = [record.position_ids_neg for record in batch_records]
if not self.is_inference:
            if self.learning_strategy != 'pairwise':
batch_labels = [record.label_id for record in batch_records]
if self.is_classify:
batch_labels = np.array(batch_labels).astype("int64").reshape(
[-1])
elif self.is_regression:
batch_labels = np.array(batch_labels).astype("float32").reshape(
[-1])
if batch_records[0].qid:
batch_qids = [record.qid for record in batch_records]
batch_qids = np.array(batch_qids).astype("int64").reshape(
[-1])
else:
batch_qids = np.array([]).astype("int64").reshape([-1])
# padding
padded_token_ids, input_mask = pad_batch_data(
batch_token_ids, pad_idx=self.pad_id, return_input_mask=True)
padded_text_type_ids = pad_batch_data(
batch_text_type_ids, pad_idx=self.pad_id)
padded_position_ids = pad_batch_data(
batch_position_ids, pad_idx=self.pad_id)
padded_task_ids = np.ones_like(
padded_token_ids, dtype="int64") * self.task_id
return_list = [
padded_token_ids, padded_text_type_ids, padded_position_ids,
padded_task_ids, input_mask
]
        if self.phase == 'train':
if self.learning_strategy == 'pairwise':
padded_token_ids_neg, input_mask_neg = pad_batch_data(
batch_token_ids_neg, pad_idx=self.pad_id, return_input_mask=True)
padded_text_type_ids_neg = pad_batch_data(
batch_text_type_ids_neg, pad_idx=self.pad_id)
padded_position_ids_neg = pad_batch_data(
batch_position_ids_neg, pad_idx=self.pad_id)
padded_task_ids_neg = np.ones_like(
padded_token_ids_neg, dtype="int64") * self.task_id
return_list += [padded_token_ids_neg, padded_text_type_ids_neg, \
padded_position_ids_neg, padded_task_ids_neg, input_mask_neg]
elif self.learning_strategy == 'pointwise':
return_list += [batch_labels]
return return_list
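The fields a downstream head receives therefore depend on the phase and strategy. A compact summary of the layouts assembled above:

```python
# predict:         [token_ids, text_type_ids, position_ids, task_ids, input_mask]
# pointwise train: [... the five above ..., labels]
# pairwise train:  [... the five above ...,
#                   token_ids_neg, text_type_ids_neg, position_ids_neg,
#                   task_ids_neg, input_mask_neg]
```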
class SequenceLabelReader(Reader):
    def _pad_batch_records(self, batch_records):
        batch_token_ids = [record.token_ids for record in batch_records]
        batch_text_type_ids = [record.text_type_ids for record in batch_records]
@@ -552,19 +612,7 @@ class SequenceLabelReader(Reader):
                ret_labels.append(label)
                continue

            ret_labels.extend([label] * len(sub_token))

        assert len(ret_tokens) == len(ret_labels)
        return ret_tokens, ret_labels
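With this change, every wordpiece simply inherits the label of its source token; the old B-/I-/S-/E- re-tagging of split tokens is gone. For example (the tokenization is illustrative):

```python
# token:      "Washington"             label: "B-LOC"
# sub_tokens: ["Wash", "##ing", "##ton"]
# ret_labels: ["B-LOC", "B-LOC", "B-LOC"]   # label copied to each piece
```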
@@ -583,6 +631,9 @@ class SequenceLabelReader(Reader):
        position_ids = list(range(len(token_ids)))
        text_type_ids = [0] * len(token_ids)
        no_entity_id = len(self.label_map) - 1
        labels = [
            label if label in self.label_map else u"O" for label in labels
        ]
        label_ids = [no_entity_id] + [
            self.label_map[label] for label in labels
        ] + [no_entity_id]
@@ -598,7 +649,7 @@ class SequenceLabelReader(Reader):
        return record
class ExtractEmbeddingReader(Reader):
    def _pad_batch_records(self, batch_records):
        batch_token_ids = [record.token_ids for record in batch_records]
        batch_text_type_ids = [record.text_type_ids for record in batch_records]
@@ -625,7 +676,7 @@ class ExtractEmbeddingReader(Reader):
        return return_list
class MRCReader(Reader):
    def __init__(self,
                 vocab_path,
                 label_map_config=None,
@@ -675,7 +726,8 @@ class MRCReader(Reader):
    def _read_json(self, input_file, is_training):
        examples = []
        with open(input_file, "r", encoding='utf-8') as f:
            input_data = json.load(f)["data"]
            for entry in input_data:
                for paragraph in entry["paragraphs"]:
@@ -806,7 +858,7 @@ class MRCReader(Reader):
                if start_offset + length == len(all_doc_tokens):
                    break
                start_offset += min(length, self.doc_stride)

            for (doc_span_index, doc_span) in enumerate(doc_spans):
                tokens = []
                token_to_orig_map = {}
@@ -890,11 +942,20 @@ class MRCReader(Reader):
            if to_append:
                batch_records.append(record)
            else:
                ds = ['s'] * 8
                for piece in palm.distribute.yield_pieces(
                        self._pad_batch_records(batch_records, phase == 'train'),
                        ds, batch_size):
                    yield piece
                batch_records, max_len = [record], len(record.token_ids)

        if phase == 'predict' and batch_records:
            ds = ['s'] * 8  # defined here as well, since the else branch above may never run
            for piece in palm.distribute.yield_pieces(
                    self._pad_batch_records(batch_records, phase == 'train'),
                    ds, batch_size):
                yield piece

    def _pad_batch_records(self, batch_records, is_training):
        batch_token_ids = [record.token_ids for record in batch_records]
@@ -978,15 +1039,11 @@ class MRCReader(Reader):
            self.current_epoch = epoch_index
            if phase == "train" and shuffle:
                np.random.shuffle(features)

            for batch_data in self._prepare_batch_data(
                    features, batch_size, phase=phase):
                yield batch_data

        return wrapper
...
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddlepalm.interface import reader as base_reader
from paddlepalm.interface import task_paradigm as base_paradigm
import os
import json
from paddle import fluid
import importlib
from paddlepalm.default_settings import *
def check_req_args(conf, name):
assert 'reader' in conf, name+': reader is required to build TaskInstance.'
assert 'paradigm' in conf, name+': paradigm is required to build TaskInstance.'
assert 'train_file' in conf or 'pred_file' in conf, name+': at least train_file or pred_file should be provided to build TaskInstance.'
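A minimal instance config that satisfies these checks might look as follows (paths and component names are illustrative):

```python
conf = {
    'reader': 'cls',                 # reader module name under READER_DIR
    'paradigm': 'cls',               # paradigm module name under PARADIGM_DIR
    'train_file': 'data/cls/train.tsv',
    'save_path': 'output_model/firstrun',
}
inst = TaskInstance('senti_cls', 0, conf)
```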
class TaskInstance(object):
def __init__(self, name, id, config, verbose=True):
self._name = name
self._config = config
self._verbose = verbose
check_req_args(config, name)
# parse Reader and Paradigm
reader_name = config['reader']
reader_mod = importlib.import_module(READER_DIR + '.' + reader_name)
Reader = getattr(reader_mod, 'Reader')
parad_name = config['paradigm']
parad_mod = importlib.import_module(PARADIGM_DIR + '.' + parad_name)
Paradigm = getattr(parad_mod, 'TaskParadigm')
self._Reader = Reader
self._Paradigm = Paradigm
self._save_infermodel_path = os.path.join(self._config['save_path'], self._name, 'infer_model')
self._save_ckpt_path = os.path.join(self._config['save_path'], 'ckpt')
self._save_infermodel_every_n_steps = config.get('save_infermodel_every_n_steps', -1)
# following flags can be fetch from instance config file
self._is_target = config.get('is_target', True)
        self._is_first_target = config.get('is_first_target', False)
self._task_reuse_scope = config.get('task_reuse_scope', name)
self._feeded_var_names = None
self._target_vars = None
# training process management
self._mix_ratio = None
self._expected_train_steps = None
self._expected_train_epochs = None
self._steps_pur_epoch = None
self._cur_train_epoch = 0
self._cur_train_step = 0
self._train_finish = False
        # dataset readers for each running phase (train/eval/pred); key is the phase, value is a Reader instance
self._reader = {'train': None, 'eval': None, 'pred': None}
self._input_layer = None
self._inputname_to_varname = {}
self._task_layer = {'train': None, 'eval': None, 'pred': None}
self._pred_input_name_list = []
self._pred_input_varname_list = []
self._pred_fetch_name_list = []
self._pred_fetch_var_list = []
self._exe = fluid.Executor(fluid.CPUPlace())
self._save_protocol = {
'input_names': 'self._pred_input_name_list',
'input_varnames': 'self._pred_input_varname_list',
'fetch_list': 'self._pred_fetch_name_list'}
def build_task_layer(self, net_inputs, phase, scope=""):
output_vars = self._task_layer[phase].build(net_inputs, scope_name=scope)
if phase == 'pred':
if output_vars is not None:
self._pred_fetch_name_list, self._pred_fetch_var_list = zip(*output_vars.items())
else:
self._pred_fetch_name_list = []
self._pred_fetch_var_list = []
return output_vars
def postprocess(self, rt_outputs, phase):
return self._task_layer[phase].postprocess(rt_outputs)
def epoch_postprocess(self, epoch_inputs, phase):
return self._task_layer[phase].epoch_postprocess(epoch_inputs)
def save(self, suffix=''):
dirpath = self._save_infermodel_path + suffix
self._pred_input_varname_list = [str(i) for i in self._pred_input_varname_list]
prog = fluid.default_main_program().clone()
fluid.io.save_inference_model(dirpath, self._pred_input_varname_list, self._pred_fetch_var_list, self._exe, prog)
conf = {}
for k, strv in self._save_protocol.items():
d = None
v = locals()
exec('d={}'.format(strv), globals(), v)
conf[k] = v['d']
with open(os.path.join(dirpath, '__conf__'), 'w') as writer:
writer.write(json.dumps(conf, indent=1))
print(self._name + ': inference model saved at ' + dirpath)
def load(self, infer_model_path=None):
if infer_model_path is None:
infer_model_path = self._save_infermodel_path
for k,v in json.load(open(os.path.join(infer_model_path, '__conf__'))).items():
strv = self._save_protocol[k]
exec('{}=v'.format(strv))
pred_prog, self._pred_input_varname_list, self._pred_fetch_var_list = \
fluid.io.load_inference_model(infer_model_path, self._exe)
print(self._name+': inference model loaded from ' + infer_model_path)
return pred_prog
@property
def name(self):
return self._name
@property
def Reader(self):
return self._Reader
@property
def Paradigm(self):
return self._Paradigm
@property
def config(self):
return self._config
@property
def reader(self):
return self._reader
@property
def pred_input(self):
return zip(*[self._pred_input_name_list, self._pred_input_varname_list])
@pred_input.setter
def pred_input(self, val):
assert isinstance(val, dict)
self._pred_input_name_list, self._pred_input_varname_list = \
zip(*[[k, v.name] for k,v in val.items()])
@property
def pred_fetch_list(self):
return [self._pred_fetch_name_list, self._pred_fetch_var_list]
@property
def task_layer(self):
return self._task_layer
@property
def is_first_target(self):
return self._is_first_target
@is_first_target.setter
def is_first_target(self, value):
self._is_first_target = bool(value)
if self._is_first_target:
assert self._is_target, "ERROR: only target task could be set as main task."
if self._verbose and self._is_first_target:
print("{}: set as main task".format(self._name))
@property
def is_target(self):
if self._is_target is not None:
return self._is_target
else:
raise ValueError("{}: is_target is None".format(self._name))
@is_target.setter
def is_target(self, value):
self._is_target = bool(value)
if self._verbose:
if self._is_target:
print('{}: set as target task.'.format(self._name))
else:
print('{}: set as aux task.'.format(self._name))
@property
def mix_ratio(self):
if self._mix_ratio is not None:
return self._mix_ratio
else:
raise ValueError("{}: mix_ratio is None".format(self._name))
@mix_ratio.setter
def mix_ratio(self, value):
self._mix_ratio = float(value)
if self._verbose:
print('{}: mix_ratio is set to {}'.format(self._name, self._mix_ratio))
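In multi-task training, a task's mix_ratio acts as its unnormalized sampling weight. A quick sketch of how ratios translate into sampling probabilities, assuming plain normalization over all tasks:

```python
ratios = {'mrc': 1.0, 'match': 0.5, 'mlm': 0.5}
total = sum(ratios.values())
probs = {name: r / total for name, r in ratios.items()}
# -> {'mrc': 0.5, 'match': 0.25, 'mlm': 0.25}
```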
@property
def save_infermodel_every_n_steps(self):
return self._save_infermodel_every_n_steps
@property
def expected_train_steps(self):
return self._expected_train_steps
@expected_train_steps.setter
def expected_train_steps(self, value):
self._expected_train_steps = value
self._expected_train_epochs = value / float(self._steps_pur_epoch)
@property
def expected_train_epochs(self):
return self._expected_train_epochs
@property
def cur_train_epoch(self):
return self._cur_train_epoch
@cur_train_epoch.setter
def cur_train_epoch(self, value):
self._cur_train_epoch = value
@property
def cur_train_step(self):
return self._cur_train_step
@cur_train_step.setter
def cur_train_step(self, value):
self._cur_train_step = value
if self._cur_train_step > self._steps_pur_epoch:
self._cur_train_epoch += 1
self._cur_train_step = 1
if self._is_target and self._cur_train_step + self._cur_train_epoch * self._steps_pur_epoch >= self._expected_train_steps:
self._train_finish = True
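The setter above rolls the step counter over into a new epoch and, for target tasks, raises the finish flag once the expected number of global steps is reached. For example, with steps_pur_epoch = 100, assigning step 101 yields cur_train_step == 1 and cur_train_epoch incremented by one.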
@property
def steps_pur_epoch(self):
return self._steps_pur_epoch
@steps_pur_epoch.setter
def steps_pur_epoch(self, value):
self._steps_pur_epoch = value
@property
def train_finish(self):
return self._train_finish
@property
def task_reuse_scope(self):
if self._task_reuse_scope is not None:
return self._task_reuse_scope
else:
raise ValueError("{}: task_reuse_scope is None".format(self._name))
@task_reuse_scope.setter
def task_reuse_scope(self, scope_name):
self._task_reuse_scope = str(scope_name)
if self._verbose:
print('{}: task_reuse_scope is set to {}'.format(self._name, self._task_reuse_scope))
def check_instances(insts):
"""to check ids, first_target"""
pass
def _check_ids():
pass
def _check_targets():
pass
def _check_reuse_scopes():
pass
@@ -21,29 +21,46 @@ import time
import numpy as np
import paddlepalm.utils.basic_helper as helper
from paddlepalm.utils import reader_helper, saver
from paddlepalm.distribute import gpu_dev_count, data_feeder, decode_fake

DEBUG=False


class Trainer(object):
    """
    The core unit for running a training/predicting session on a single task. A Trainer builds the computation graph, manages the training and evaluation process, and handles model/checkpoint saving as well as pretrain_model/checkpoint loading.
    """

    def __init__(self, name, mix_ratio=1.0, reuse_head_with=None):
        """Create a new trainer.

        Args:
            name: string. The name of the trainer (training task).
            mix_ratio: sampling weight of this trainer in multi-task learning mode. Default is 1.0.
            reuse_head_with: reuse the task head parameters of another trainer. Default is None (no reuse).
        """
        self._name = name
        self._pred_reader = None
        self._task_head = None
        self._pred_head = None

        self._train_reader = None
        self._predict_reader = None
        self._train_iterator = None
        self._predict_iterator = None

        self._train_init = False
        self._predict_init = False

        self._check_save = lambda: False

@@ -58,20 +75,21 @@ class Trainer(object):
        self._num_examples = 0

        self._multi_task = False
        self._as_auxilary = False
        self._task_id = None

        # training process management
        self._mix_ratio = mix_ratio
        self._expected_train_steps = None
        self._expected_train_epochs = None
        self._steps_pur_epoch = None
        self._pred_steps_pur_epoch = None
        self._cur_train_epoch = 0
        self._cur_train_step = 0
        self._train_finish = False

        self._inputname_to_varname = {}
        self._pred_input_name_list = []
        self._pred_input_varname_list = []
        self._pred_fetch_name_list = []
@@ -89,41 +107,22 @@ class Trainer(object):
        self._lock = False
        self._build_forward = False
    def build_forward(self, backbone, task_head):
        """
        Build the forward computation graph for training, which is usually built from the input layer up to the loss node.

        Args:
            backbone: a Backbone object with phase == 'train', used to extract multi-level text features, e.g., contextual word embeddings and sentence embeddings.
            task_head: a Head object with phase == 'train', used to build the task-specific output layers.

        Return:
            loss_var: a Variable object. The computation graph variable (node) of the loss.
        """
        # assert not self._multi_task, "you cannot build_forward in a trainer that is wrapped by a MultiHeadTrainer."
        self._task_head = task_head
        self._backbone = backbone

        self._build_forward = True
@@ -142,10 +141,11 @@ class Trainer(object):
        # merge reader input attrs from backbone and task_instances
        input_names, shape_and_dtypes, name_to_position = reader_helper.merge_input_attrs(backbone.inputs_attr, task_attr_from_reader, insert_taskid=False)
        # shapes: [task_id, shapes_of_backbone, shapes_of_inst1, ..., shapes_of_instN]
        self._shape_and_dtypes = shape_and_dtypes
        self._name_to_position = name_to_position
        self._input_names = input_names

        if DEBUG:
            print('----- for debug -----')
@@ -154,25 +154,24 @@ class Trainer(object):
            print('joint input shape and dtypes:')
            print(joint_shape_and_dtypes)

        input_attrs = [[i, j, k] for i, (j, k) in zip(input_names, shape_and_dtypes)]

        train_prog = fluid.Program()
        train_init_prog = fluid.Program()
        self._train_prog = train_prog
        self._train_init_prog = train_init_prog

        if not self._multi_task:
            with fluid.program_guard(train_prog, train_init_prog):
                net_inputs = reader_helper.create_net_inputs(input_attrs, async=False)
                bb_output_vars = backbone.build(net_inputs)
        else:
            net_inputs = reader_helper.create_net_inputs(input_attrs, async=False)
            bb_output_vars = backbone.build(net_inputs)
        self._net_inputs = net_inputs

        assert sorted(bb_output_vars.keys()) == sorted(backbone.outputs_attr.keys())

@@ -183,9 +182,14 @@ class Trainer(object):
        task_inputs['reader'] = task_inputs_from_reader

        scope = self.name + '.'
        if not self._multi_task:
            with fluid.program_guard(train_prog, train_init_prog):
                with fluid.unique_name.guard(scope):
                    output_vars = self._build_head(task_inputs, phase='train', scope=scope)
        else:
            with fluid.unique_name.guard(scope):
                output_vars = self._build_head(task_inputs, phase='train', scope=scope)

        output_vars = {self.name + '.' + key: val for key, val in output_vars.items()}
        old = len(task_output_vars)  # for debug
        task_output_vars.update(output_vars)
@@ -203,23 +207,106 @@ class Trainer(object):
        if not self._multi_task:
            with fluid.program_guard(train_prog, train_init_prog):
                loss_var = fluid.layers.reduce_sum(task_output_vars[self.name + '.loss'])
        else:
            loss_var = fluid.layers.reduce_sum(task_output_vars[self.name + '.loss'])

        self._loss_var = loss_var

        if not self._multi_task:
            self._init_exe_prog(for_train=True)

        return loss_var
    def build_predict_forward(self, pred_backbone, pred_head):
        """
        Build the computation graph for evaluation and prediction.

        Arguments:
            - pred_backbone: a Backbone object with phase == 'predict'. To evaluate the model during training, the predict backbone should be identical to the train backbone.
            - pred_head: a Head object with phase == 'predict'. To evaluate the model during training, the predict head should be identical to the train head.

        Return:
            - output_vars: dict type. Each value is a computation graph variable (node), as specified by the outputs_attr of pred_head.
        """
        self._pred_head = pred_head
        self._pred_backbone = pred_backbone
        pred_task_attr_from_reader = helper.encode_inputs(self._pred_head.inputs_attrs['reader'], self.name)

        pred_input_names, pred_shape_and_dtypes, pred_name_to_position = reader_helper.merge_input_attrs(pred_backbone.inputs_attr, pred_task_attr_from_reader, insert_taskid=False)
        pred_input_attrs = [[i, j, k] for i, (j, k) in zip(pred_input_names, pred_shape_and_dtypes)]
        self._pred_shape_and_dtypes = pred_shape_and_dtypes
        self._pred_name_to_position = pred_name_to_position

        pred_prog = fluid.Program()
        self._pred_prog = pred_prog
        pred_init_prog = fluid.Program()
        self._pred_init_prog = pred_init_prog
        with fluid.program_guard(pred_prog, pred_init_prog):
            pred_net_inputs = reader_helper.create_net_inputs(pred_input_attrs)
            pred_bb_output_vars = pred_backbone.build(pred_net_inputs)
        self._pred_net_inputs = pred_net_inputs

        # prepare predict vars for saving inference model
        with fluid.program_guard(pred_prog, pred_init_prog):
            cur_inputs = helper.decode_inputs(pred_net_inputs, self.name)
            self._pred_input_name_list, self._pred_input_varname_list = \
                zip(*[[k, v.name] for k, v in cur_inputs.items()])

            pred_task_inputs = {'backbone': pred_bb_output_vars, 'reader': cur_inputs}
            scope = self.name + '.'
            with fluid.unique_name.guard(scope):
                output_vars = self._build_head(pred_task_inputs, phase='predict', scope=scope)

        if output_vars is not None:
            self._pred_fetch_name_list, self._pred_fetch_var_list = zip(*output_vars.items())
        else:
            self._pred_fetch_name_list = []
            self._pred_fetch_var_list = []

        if not self._multi_task:
            self._init_exe_prog(for_train=False)
            self._exe.run(self._pred_init_prog)

        return output_vars
    def build_backward(self, optimizer, weight_decay=None, use_ema=False, ema_decay=None):
        """
        Build the backward computation graph and the training strategy.

        Arguments:
            - optimizer: the optimizer object used to derive and apply parameter updates.
            - weight_decay: optional, default is None (weight decay disabled).
            - use_ema: optional, default is False. Whether to apply the Exponential Moving Average strategy on parameter updates.
            - ema_decay: optional, default is None. Only takes effect when use_ema == True. Controls the decay rate of the EMA strategy.
        """
        # build optimizer
        assert self._loss_var is not None and self._train_init_prog is not None, "train graph not found! You should build_forward first."
        optimizer._set_prog(self._train_prog, self._train_init_prog)
        with fluid.program_guard(self._train_prog, self._train_init_prog):
            param_grads = optimizer._build()

            if weight_decay is not None:

                param_list = dict()

                for param in self._train_prog.global_block().all_parameters():
                    param_list[param.name] = param * 1.0
                    param_list[param.name].stop_gradient = True
@@ -247,73 +334,211 @@ class Trainer(object):
                ema = fluid.optimizer.ExponentialMovingAverage(ema_decay)
                ema.update()

        self._exe.run(self._train_init_prog)
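For reference, the two strategies configured here reduce to simple per-step update rules. A sketch of the math (decay rates are the arguments above; lr is the learning rate):

```python
# weight decay (wd = weight_decay), applied alongside the gradient step:
#   param <- param - lr * grad - lr * wd * param
# exponential moving average (d = ema_decay; shadow weights used at eval):
#   shadow <- d * shadow + (1 - d) * param
```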
    def set_as_aux(self):
        """Set the task in this trainer as an auxiliary task.
        CAUTION: this API only works in multi-task learning mode. Every task is set as a target task by default."""
        self._as_auxilary = True

    def fit_reader(self, reader, phase='train'):
        """
        Bind a reader and its loaded train/predict data to the trainer.

        Args:
            reader: a Reader object. The running phase of the reader should be consistent with the `phase` argument of this method.
            phase: running phase. Currently supports: train, predict.
        """
        # load data
        self._check_phase(phase)
        assert self._shape_and_dtypes is not None or self._pred_shape_and_dtypes is not None, "You need to build_forward or build_predict_forward first to prepare input features."

        batch_size = reader._batch_size
        self._num_epochs = reader.num_epochs

        if phase == 'train':
            self._train_reader = reader
            self._steps_pur_epoch = reader.num_examples // batch_size // gpu_dev_count
            shape_and_dtypes = self._shape_and_dtypes
            name_to_position = self._name_to_position
            if self._task_id is not None:
                self._net_inputs['__task_id'] = self._task_id
            net_inputs = self._net_inputs
            self._train_batch_size = batch_size
            self._num_examples = reader.num_examples
            reader_helper.check_io(self._backbone.inputs_attr, reader.outputs_attr, in_name='backbone', out_name='reader(train)')
            reader_helper.check_io(self._task_head.inputs_attrs['reader'], reader.outputs_attr, in_name='task_head(reader)', out_name='reader(train)')
            reader_helper.check_io(self._task_head.inputs_attrs['backbone'], self._backbone.outputs_attr, in_name='task_head(backbone, train)', out_name='backbone')
        elif phase == 'predict':
            self._predict_reader = reader
            # round up so that the tail batch is not dropped at predict time
            tail = reader.num_examples % batch_size > 0
            self._pred_steps_pur_epoch = (reader.num_examples // batch_size + (1 if tail else 0)) // gpu_dev_count
            shape_and_dtypes = self._pred_shape_and_dtypes
            name_to_position = self._pred_name_to_position
            net_inputs = self._pred_net_inputs
            self._predict_batch_size = batch_size
            self._pred_num_examples = reader.num_examples
            reader_helper.check_io(self._pred_backbone.inputs_attr, reader.outputs_attr, in_name='backbone', out_name='reader(predict)')
            reader_helper.check_io(self._pred_head.inputs_attrs['reader'], reader.outputs_attr, in_name='task_head(reader)', out_name='reader(predict)')
            reader_helper.check_io(self._pred_head.inputs_attrs['backbone'], self._pred_backbone.outputs_attr, in_name='task_head(backbone, predict)', out_name='backbone')
        else:
            raise NotImplementedError()

        print('ok!')

        # merge dataset iterators and create net input vars
        iterator = reader._iterator()
        prefix = self.name

        # runtime check and adaptation of the data yielded by the iterator
        iterator_fn = reader_helper.create_iterator_fn(iterator, prefix, shape_and_dtypes, name_to_position, return_type='dict')
        self._raw_iterator_fn = iterator_fn
        feed_batch_process_fn = reader_helper.create_feed_batch_process_fn(net_inputs)
        if gpu_dev_count > 1:
            distribute_feeder_fn = data_feeder(iterator_fn, feed_batch_process_fn)
        else:
            distribute_feeder_fn = iterator_fn()

        if phase == 'train':
            self._train_iterator = distribute_feeder_fn
            self._feed_batch_process_fn = feed_batch_process_fn
        elif phase == 'predict':
            self._predict_iterator = distribute_feeder_fn
            self._pred_feed_batch_process_fn = feed_batch_process_fn
    def load_ckpt(self, model_path):
        """
        Load a training checkpoint for further training or predicting.

        Args:
            model_path: the path of the saved checkpoint/parameters.
        """
        assert self._train_init_prog is not None or self._pred_init_prog is not None, "model graph not built. You should at least build_forward or build_predict_forward to load its checkpoint."

        if self._train_init_prog is not None:
            saver.init_pretraining_params(
                self._exe,
                model_path,
                convert=False,
                main_program=self._train_init_prog,
                strict=True)
        elif self._pred_init_prog is not None:
            saver.init_pretraining_params(
                self._exe,
                model_path,
                convert=False,
                main_program=self._pred_init_prog,
                strict=True)
        else:
            raise Exception("model not found. You should at least build_forward or build_predict_forward to load its checkpoint.")
def load_predict_model(self, model_path):
raise NotImplementedError()
    def load_pretrain(self, model_path, convert=False):
        """
        Load pretrained model (backbone) parameters for training.

        Args:
            model_path: the path of the saved pretrained parameters.
        """
        assert self._train_init_prog is not None, "training graph not found. You should at least build_forward to load pretrained parameters."

        saver.init_pretraining_params(
            self._exe,
            model_path,
            convert=convert,
            main_program=self._train_init_prog)
def set_saver(self, save_path, save_steps, save_type='ckpt'):
"""
create a built-in saver for this trainer. The saver automatically saves a checkpoint or predict model every `save_steps` training steps.
Args:
save_path: a string. the path to save checkpoints or predict models.
save_steps: an integer. the frequency to save models.
save_type: a string. The type of model to save. Currently supports checkpoint ('ckpt') and predict model ('predict'); default is 'ckpt'. To save both types, set it to "ckpt,predict".
"""
save_type = save_type.split(',')
if 'predict' in save_type:
assert self._pred_head is not None, "Predict head not found! You should build_predict_head first if you want to save predict model."
assert save_path is not None and save_steps is not None, 'save_path and save_steps are required to save model.'
self._save_predict = True
if not os.path.exists(save_path):
os.makedirs(save_path)
else:
self._save_predict = False
if 'ckpt' in save_type:
if save_path is not None and save_steps is not None:
self._save_ckpt = True
if not os.path.exists(save_path):
os.makedirs(save_path)
else:
print("WARNING: save_path or save_steps is not set, model will not be saved during training.")
self._save_ckpt = False
else:
self._save_ckpt = False
def temp_func():
if (self._save_predict or self._save_ckpt) and self._cur_train_step % save_steps == 0:
if self._save_predict:
self._save(save_path, suffix='pred.step'+str(self._cur_train_step))
print('predict model has been saved at '+os.path.join(save_path, 'pred.step'+str(self._cur_train_step)))
if self._save_ckpt:
fluid.io.save_persistables(self._exe, os.path.join(save_path, 'ckpt.step'+str(self._cur_train_step)), self._train_prog)
print('checkpoint has been saved at '+os.path.join(save_path, 'ckpt.step'+str(self._cur_train_step)))
return True
else:
return False
self._check_save = temp_func
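# Illustrative usage (a sketch, not framework-verified): with a trainer whose
# forward graph is already built, attach the saver before calling train() so
# that both checkpoints and predict models are dumped every 1000 steps. The
# path below is hypothetical.
#
#   trainer.set_saver(save_path='output_model/demo', save_steps=1000, save_type='ckpt,predict')
#   trainer.train(print_steps=50)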
def train(self, print_steps=5):
"""
start training.
Args:
print_steps: int. Logging frequency of training message, e.g., current step, loss and speed.
"""
iterator = self._train_iterator
self._distribute_train_prog = fluid.CompiledProgram(self._train_prog).with_data_parallel(loss_name=self._loss_var.name)
# if save_path is not None or save_steps is not None:
# assert self._save_predict_model, "If you want to save model, you need set save_predict_model=True when this trainer is built."
...@@ -334,7 +559,7 @@ class Trainer(object):
rt_outputs = self.train_one_step(feed)
# if gpu_dev_count > 1:
# feed, mask = feed
# rt_outputs = self._exe.run(self._train_prog, feed=feed, fetch_list=self._fetch_list)
# print(rt_outputs)
# print(len(rt_outputs))
# if gpu_dev_count > 1:
...@@ -344,13 +569,8 @@ class Trainer(object):
# rt_outputs = {k:v for k,v in zip(self._fetch_names, rt_outputs)}
task_rt_outputs = {k[len(self.name+'.'):]: v for k,v in rt_outputs.items() if k.startswith(self.name+'.')}
self._task_head.batch_postprocess(task_rt_outputs)

if print_steps > 0 and self._cur_train_step % print_steps == 0:
loss = rt_outputs[self.name+'.loss']
...@@ -360,61 +580,197 @@ class Trainer(object):
time_cost = time_end - time_begin
print("step {}/{} (epoch {}), loss: {:.3f}, speed: {:.2f} steps/s".format(
(self._cur_train_step-1) % self._steps_pur_epoch + 1, self._steps_pur_epoch, self._cur_train_epoch,
loss, print_steps / time_cost))
time_begin = time.time()
self._check_save()
# if cur_task.train_finish and cur_task.cur_train_step + cur_task.cur_train_epoch * cur_task.steps_pur_epoch == cur_task.expected_train_steps:
# print(cur_task.name+': train finished!')
# cur_task.save()
if self._num_epochs is None and not self._multi_task and self._cur_train_step == self._steps_pur_epoch:
break
# save_path = os.path.join(main_conf['save_path'], 'ckpt',
# "step_" + str(global_step))
# fluid.io.save_persistables(self.exe, save_path, saver_program)
# print("ALL tasks train finished, exiting...")
def predict(self, output_dir=None, print_steps=1000):
"""
start predicting.
Args:
output_dir: str. The path to save prediction results, default is None. If set as None, the results will be printed to the screen directly.
print_steps: int. Logging frequency of predicting message, e.g., current progress and speed.
"""
iterator = self._predict_iterator
self._distribute_pred_prog = fluid.CompiledProgram(self._pred_prog).with_data_parallel()
if output_dir is not None and not os.path.exists(output_dir):
os.makedirs(output_dir)
time_begin = time.time()
cur_predict_step = 0
for feed in iterator:
rt_outputs = self.predict_one_batch(feed)
# rt_outputs = {k[len(self.name+'.'):]: v for k,v in rt_outputs.items() if k.startswith(self.name+'.')}
self._pred_head.batch_postprocess(rt_outputs)
cur_predict_step += 1
if print_steps > 0 and cur_predict_step % print_steps == 0:
time_end = time.time()
time_cost = time_end - time_begin
print("batch {}/{}, speed: {:.2f} steps/s".format(
cur_predict_step, self._pred_steps_pur_epoch,
print_steps / time_cost))
time_begin = time.time()
if self._pred_head.epoch_inputs_attrs:
reader_outputs = self._predict_reader.get_epoch_outputs()
else:
reader_outputs = None
results = self._pred_head.epoch_postprocess({'reader':reader_outputs}, output_dir=output_dir)
return results
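# Illustrative usage (a sketch, assuming the predict graph and iterator have
# been prepared, e.g. via build_predict_forward and a fitted predict reader);
# the output directory is hypothetical.
#
#   results = trainer.predict(output_dir='output/predict', print_steps=100)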
def _check_phase(self, phase):
assert phase in ['train', 'predict'], "Supported phases: train, predict."
def _set_multitask(self):
self._multi_task = True
def _set_task_id(self, task_id):
self._task_id = task_id
def _init_exe_prog(self, for_train=True):
if not self._train_init and not self._predict_init:
on_gpu = gpu_dev_count > 0
self._exe = helper.build_executor(on_gpu)
if for_train:
assert self._train_prog is not None, "train graph not found! You should call build_forward before randomly initializing parameters."
self._train_init = True
else:
assert self._pred_prog is not None, "predict graph not found! You should call build_predict_head before randomly initializing parameters."
self._predict_init = True
# def random_init_params(self):
# """
# randomly initialize model parameters.
# """
#
# if not self._train_init:
# self._init_exe_prog()
#
# print('random init params...')
# self._exe.run(self._train_init_prog)
def get_one_batch(self, phase='train'):
self._check_phase(phase)
if phase == 'train':
return next(self._train_reader)
elif phase == 'predict':
return next(self._predict_reader)
else:
raise NotImplementedError()
def _set_exe(self, exe):
self._exe = exe
def _set_dist_train(self, prog):
self._distribute_train_prog = prog
def _set_fetch_list(self, fetch_list):
self._fetch_list = fetch_list
# def train_one_step(self, batch, executor=None, distribute_train_prog=None, fetch_list=None):
def train_one_step(self, batch):
# exe = self._exe if executor is None else executor
# distribute_train_prog = self._distribute_train_prog if distribute_train_prog is None else distribute_train_prog
# fetch_list = self._fetch_list if fetch_list is None else fetch_list
exe = self._exe
distribute_train_prog = self._distribute_train_prog
fetch_list = self._fetch_list
if gpu_dev_count > 1:
feed, mask = batch
rt_outputs = exe.run(distribute_train_prog, feed=feed, fetch_list=fetch_list)
num_fakes = decode_fake(len(rt_outputs[0]), mask, self._train_batch_size)
for _ in range(num_fakes):
for item in rt_outputs:
item.pop()
else:
feed = self._feed_batch_process_fn(batch)
rt_outputs = exe.run(distribute_train_prog, feed=feed, fetch_list=fetch_list)

rt_outputs = {k:v for k,v in zip(self._fetch_names, rt_outputs)}
self._cur_train_step += 1
self._cur_train_epoch = (self._cur_train_step-1) // self._steps_pur_epoch
return rt_outputs
def predict_one_batch(self, batch):
if gpu_dev_count > 1:
feed, mask = batch
rt_outputs = self._exe.run(self._distribute_pred_prog, feed=feed, fetch_list=self._pred_fetch_list)
num_fakes = decode_fake(len(rt_outputs[0]), mask, self._batch_size)
for _ in range(num_fakes):
for item in rt_outputs:
item.pop()
else:
feed = self._pred_feed_batch_process_fn(batch)
rt_outputs = self._exe.run(self._distribute_pred_prog, feed=feed, fetch_list=self._pred_fetch_list)
rt_outputs = {k:v for k,v in zip(self._pred_fetch_name_list, rt_outputs)}
return rt_outputs
@property
def name(self):
return self._name
@property
def num_examples(self):
return self._num_examples
@property
def mix_ratio(self):
return self._mix_ratio
@mix_ratio.setter
def mix_ratio(self, value):
self._mix_ratio = value
@property
def num_epochs(self):
return self._num_epochs
@property
def cur_train_step(self):
return self._cur_train_step
@property
def cur_train_epoch(self):
return self._cur_train_epoch
@property
def steps_pur_epoch(self):
return self._steps_pur_epoch
def _build_head(self, net_inputs, phase, scope=""): def _build_head(self, net_inputs, phase, scope=""):
self._check_phase(phase)
if phase == 'train': if phase == 'train':
output_vars = self._task_head.build(net_inputs, scope_name=scope) output_vars = self._task_head.build(net_inputs, scope_name=scope)
if phase == 'pred': if phase == 'predict':
output_vars = self._pred_head.build(net_inputs, scope_name=scope) output_vars = self._pred_head.build(net_inputs, scope_name=scope)
if output_vars is not None:
self._pred_fetch_name_list, self._pred_fetch_var_list = zip(*output_vars.items())
else:
self._pred_fetch_name_list = []
self._pred_fetch_var_list = []
return output_vars return output_vars
def _postprocess(self, rt_outputs, phase):
return self._task_layer[phase].postprocess(rt_outputs)
def _epoch_postprocess(self, epoch_inputs, phase):
return self._task_layer[phase].epoch_postprocess(epoch_inputs)
def _save(self, save_path, suffix=None):
# dirpath = save_path.rstrip('/').rstrip('\\') + suffix
if suffix is not None:
dirpath = os.path.join(save_path, suffix)
...@@ -422,7 +778,7 @@ class Trainer(object):
dirpath = save_path

self._pred_input_varname_list = [str(i) for i in self._pred_input_varname_list]

prog = self._pred_prog.clone()
fluid.io.save_inference_model(dirpath, self._pred_input_varname_list, self._pred_fetch_var_list, self._exe, prog)

conf = {}
...@@ -435,6 +791,7 @@ class Trainer(object):
writer.write(json.dumps(conf, indent=1))
print(self._name + ': predict model saved at ' + dirpath)

def _load(self, infer_model_path=None):
if infer_model_path is None:
infer_model_path = self._save_infermodel_path
...@@ -446,97 +803,3 @@ class Trainer(object):
print(self._name+': inference model loaded from ' + infer_model_path)
return pred_prog
@property
def name(self):
return self._name
@property
def num_examples(self):
return self._num_examples
# @property
# def _pred_input(self):
# return zip(*[self._pred_input_name_list, self._pred_input_varname_list])
# @_pred_input.setter
# def _pred_input(self, val):
# assert isinstance(val, dict)
# self._pred_input_name_list, self._pred_input_varname_list = \
# zip(*[[k, v.name] for k,v in val.items()])
# @property
# def _pred_fetch_list(self):
# return [self._pred_fetch_name_list, self._pred_fetch_var_list]
@property
def mix_ratio(self):
if self._mix_ratio is not None:
return self._mix_ratio
else:
raise ValueError("{}: mix_ratio is None".format(self._name))
@mix_ratio.setter
def mix_ratio(self, value):
self._mix_ratio = float(value)
if self._verbose:
print('{}: mix_ratio is set to {}'.format(self._name, self._mix_ratio))
@property
def save_infermodel_every_n_steps(self):
return self._save_infermodel_every_n_steps
@save_infermodel_every_n_steps.setter
def save_infermodel_every_n_steps(self, val):
self._save_infermodel_every_n_steps = val
@property
def expected_train_steps(self):
return self._expected_train_steps
@expected_train_steps.setter
def expected_train_steps(self, value):
self._expected_train_steps = value
self._expected_train_epochs = value / float(self._steps_pur_epoch)
@property
def expected_train_epochs(self):
return self._expected_train_epochs
@property
def cur_train_epoch(self):
return self._cur_train_epoch
@property
def cur_train_step(self):
return self._cur_train_step
# @cur_train_step.setter
# def _cur_train_step(self, value):
# self._cur_train_step = value
# if self._cur_train_step > self._steps_pur_epoch:
# self._cur_train_epoch += 1
# self._cur_train_step = 1
# if self._is_target and self._cur_train_step + self._cur_train_epoch * self._steps_pur_epoch >= self._expected_train_steps:
# self._train_finish = True
@property
def steps_pur_epoch(self):
return self._steps_pur_epoch
@steps_pur_epoch.setter
def steps_pur_epoch(self, value):
self._steps_pur_epoch = value
@property
def train_finish(self):
return self._train_finish
def tasklayer_reuse_with(self, task):
assert isinstance(task, Task)
if self._lock:
raise Exception('you can only set tasklayer reuses BEFORE Controller created.')
self._task_reuse_scope = task.name
def _set_lock(self):
self._lock = True
...@@ -3,6 +3,7 @@ import os
import json
import yaml
from config_helper import PDConfig
import logging
from paddle import fluid

def get_basename(f):
...
...@@ -16,6 +16,7 @@
import os
import sys
import random
import logging
import numpy as np
import paddle
from paddle import fluid
...@@ -35,10 +36,43 @@ def create_feed_batch_process_fn(net_inputs):
return feed_batch_process_fn
# def create_multihead_feed_batch_process_fn(net_inputs):
#
# def feed_batch_process_fn(data, id=-1):
# # temps = {}
# # for i in range(len(net_inputs)):
# temp = {}
# inputs = net_inputs[id] if id != -1 else net_inputs
#
# for q, var in inputs.items():
# if isinstance(var, str) or isinstance(var, unicode):
# temp[var] = data[q]
# else:
# temp[var.name] = data[q]
# # temps[i] = temp
#
# return temp
#
# return feed_batch_process_fn
def check_io(in_attr, out_attr, strict=False, in_name="left", out_name="right"):
for name, attr in in_attr.items():
assert name in out_attr, in_name+': '+name+' not found in '+out_name
if attr != out_attr[name]:
if strict:
raise ValueError(name+': shape or dtype not consistent!')
else:
logging.warning('{}: shape or dtype not consistent!\n{}:\n{}\n{}:\n{}'.format(name, in_name, attr, out_name, out_attr[name]))
def _check_and_adapt_shape_dtype(rt_val, attr, message=""): def _check_and_adapt_shape_dtype(rt_val, attr, message=""):
if not isinstance(rt_val, np.ndarray): if not isinstance(rt_val, np.ndarray):
if rt_val is None:
raise Exception(message+": get None value. ")
rt_val = np.array(rt_val) rt_val = np.array(rt_val)
assert rt_val.dtype != np.dtype('O'), "yielded data is not a valid tensor(number of elements on some dimension may differ)." assert rt_val.dtype != np.dtype('O'), message+"yielded data is not a valid tensor (number of elements on some dimension may not consistent): {}".format(rt_val)
if rt_val.dtype == np.dtype('float64'): if rt_val.dtype == np.dtype('float64'):
rt_val = rt_val.astype('float32') rt_val = rt_val.astype('float32')
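# A minimal sketch of the adaptation above (assuming the remainder of the
# function only validates shapes against `attr` and returns the adapted
# array): float64 data is downcast to float32, and ragged object-dtype data
# is rejected by the assert.
#
#   val = _check_and_adapt_shape_dtype([[0.1, 0.2]], ([-1, 2], 'float32'))
#   # val.dtype == np.dtype('float32')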
...@@ -126,6 +160,41 @@ def create_iterator_fn(iterator, iterator_prefix, shape_and_dtypes, outname_to_p
return iterator_fn
def create_multihead_iterator_fn(iterators, iterator_prefixes, joint_shape_and_dtypes, mrs, names, outname_to_pos, dev_count=1, keep_one_task=True):
task_ids = range(len(iterators))
weights = [mr / float(sum(mrs)) for mr in mrs]
if not keep_one_task:
dev_count = 1
def iterator():
while True:
id = np.random.choice(task_ids, p=weights)
task_id_tensor = np.array([id]).astype("int64")
for i in range(dev_count):
outputs = next(iterators[id]) # dict type
prefix = iterator_prefixes[id]
results = {}
results['__task_id'] = task_id_tensor
for outname, val in outputs.items():
task_outname = prefix + '.' + outname
if outname in names[id]:
idx = outname_to_pos[id][outname]
val = _check_and_adapt_shape_dtype(val, joint_shape_and_dtypes[id][idx], message=outname+': ')
results[outname] = val
if task_outname in names[id]:
idx = outname_to_pos[id][task_outname]
val = _check_and_adapt_shape_dtype(val, joint_shape_and_dtypes[id][idx], message=task_outname+': ')
results[task_outname] = val
yield results
return iterator
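# The task-sampling rule above, in isolation: mix ratios are normalized into a
# categorical distribution over task ids. For example, mrs=[1.0, 0.5, 0.5]
# yields sampling probabilities [0.5, 0.25, 0.25].
#
#   import numpy as np
#   mrs = [1.0, 0.5, 0.5]
#   weights = [mr / float(sum(mrs)) for mr in mrs]
#   task_id = np.random.choice(range(len(mrs)), p=weights)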
def create_joint_iterator_fn(iterators, iterator_prefixes, joint_shape_and_dtypes, mrs, outname_to_pos, dev_count=1, keep_one_task=True, verbose=0):
"""
...@@ -229,7 +298,7 @@ def create_joint_iterator_fn(iterators, iterator_prefixes, joint_shape_and_dtype
return iterator
def merge_input_attrs(backbone_attr, task_attrs, insert_taskid=True, insert_batchsize=False, insert_seqlen=False, insert_batchsize_x_seqlen=False):
"""
Args:
task_attrs(list[dict]|dict): task input attributes, key=attr_name, val=[shape, dtype], support single task and nested tasks
...@@ -241,7 +310,7 @@ def merge_input_attrs(backbone_attr, task_attrs, insert_taskid=True, insert_batc
names = []
start = 0
if insert_taskid:
ret.append(([1, 1], 'int64'))
names.append('__task_id')
start += 1
...@@ -273,5 +342,3 @@ def merge_input_attrs(backbone_attr, task_attrs, insert_taskid=True, insert_batc
for pos, k in enumerate(task_names, start=len(name_to_position)):
name_to_position[k] = pos
return names, ret, name_to_position
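# Illustrative call (toy attributes; names are hypothetical): with the default
# flags, merging one backbone attr and one task attr prepends the __task_id
# slot and returns the joint name list, the [shape, dtype] list, and the
# name-to-position map.
#
#   backbone_attr = {'token_ids': [[-1, -1, 1], 'int64']}
#   task_attr = {'label_ids': [[-1, 1], 'int64']}
#   names, shape_dtypes, name_to_pos = merge_input_attrs(backbone_attr, task_attr)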
...@@ -46,26 +46,35 @@ def init_checkpoint(exe, init_checkpoint_path, main_program, skip_list = []):
def init_pretraining_params(exe,
pretraining_params_path,
convert,
main_program,
strict=False):
assert os.path.exists(pretraining_params_path
), "[%s] cannot be found." % pretraining_params_path

if convert:
assert os.path.exists(os.path.join(pretraining_params_path, '__palmmodel__')), "__palmmodel__ not found."
with tarfile.open(os.path.join(pretraining_params_path, '__palmmodel__'), 'r') as f:
f.extractall(os.path.join(pretraining_params_path, '.temp'))
log_path = os.path.join(pretraining_params_path, '__palmmodel__')
pretraining_params_path = os.path.join(pretraining_params_path, '.temp')
else:
log_path = pretraining_params_path

print("Loading pretraining parameters from {}...".format(pretraining_params_path))

def existed_params(var):
if not isinstance(var, fluid.framework.Parameter):
return False
if not os.path.exists(os.path.join(pretraining_params_path, var.name)):
if strict:
raise Exception('Error: {} not found in {}.'.format(var.name, log_path))
else:
print('Warning: {} not found in {}.'.format(var.name, log_path))
return os.path.exists(os.path.join(pretraining_params_path, var.name))

fluid.io.load_vars(
...@@ -73,8 +82,8 @@ def init_pretraining_params(exe,
pretraining_params_path,
main_program=main_program,
predicate=existed_params)
if convert:
shutil.rmtree(pretraining_params_path)
print('')
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddlepalm.interface import reader
from paddlepalm.reader.utils.reader4ernie import ClassifyReader
class Reader(reader):
def __init__(self, config, phase='train', dev_count=1, print_prefix=''):
"""
Args:
phase: train, eval, pred
"""
self._is_training = phase == 'train'
reader = ClassifyReader(config['vocab_path'],
max_seq_len=config['max_seq_len'],
do_lower_case=config.get('do_lower_case', False),
for_cn=config.get('for_cn', False),
random_seed=config.get('seed', None))
self._reader = reader
self._dev_count = dev_count
self._batch_size = config['batch_size']
self._max_seq_len = config['max_seq_len']
self._num_classes = config['n_classes']
if phase == 'train':
self._input_file = config['train_file']
self._num_epochs = None # prevent the iterator from terminating
self._shuffle = config.get('shuffle', True)
# self._shuffle_buffer = config.get('shuffle_buffer', 5000)
elif phase == 'eval':
self._input_file = config['dev_file']
self._num_epochs = 1
self._shuffle = False
self._batch_size = config.get('pred_batch_size', self._batch_size)
elif phase == 'pred':
self._input_file = config['pred_file']
self._num_epochs = 1
self._shuffle = False
self._batch_size = config.get('pred_batch_size', self._batch_size)
self._phase = phase
# self._batch_size =
self._print_first_n = config.get('print_first_n', 0)
@property
def outputs_attr(self):
if self._is_training:
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32'],
"label_ids": [[-1,1], 'int64'],
"task_ids": [[-1, -1, 1], 'int64']
}
else:
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"task_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32']
}
def load_data(self):
self._data_generator = self._reader.data_generator(self._input_file, self._batch_size, self._num_epochs, dev_count=self._dev_count, shuffle=self._shuffle, phase=self._phase)
def iterator(self):
def list_to_dict(x):
names = ['token_ids', 'segment_ids', 'position_ids', 'task_ids', 'input_mask',
'label_ids', 'unique_ids']
outputs = {n: i for n,i in zip(names, x)}
del outputs['unique_ids']
if not self._is_training:
del outputs['label_ids']
return outputs
for batch in self._data_generator():
yield list_to_dict(batch)
def get_epoch_outputs(self):
return {'examples': self._reader.get_examples(self._phase),
'features': self._reader.get_features(self._phase)}
@property
def num_examples(self):
return self._reader.get_num_examples(phase=self._phase)
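# Example config for this classification reader (illustrative values; the
# vocab and data paths are hypothetical):
#
#   config = {
#       'vocab_path': 'pretrain_model/ernie/vocab.txt',
#       'max_seq_len': 128,
#       'batch_size': 32,
#       'n_classes': 2,
#       'train_file': 'data/cls/train.tsv',
#   }
#   reader = Reader(config, phase='train')
#   reader.load_data()
#   batch = next(reader.iterator())  # dict with token_ids, position_ids, segment_ids, input_mask, label_ids, task_ids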
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddlepalm.interface import reader
from paddlepalm.reader.utils.reader4ernie import ClassifyReader
def match(vocab_path, max_seq_len, phase, do_lower_case=True, dev_count=1):
config = {
xxx}  # placeholder: fill in the reader config here
return Reader(config, phase=phase, dev_count=dev_count)
class Reader(reader):
def __init__(self, config, phase='train', dev_count=1, print_prefix=''):
"""
Args:
phase: train, eval, pred
"""
self._is_training = phase == 'train'
reader = ClassifyReader(config['vocab_path'],
max_seq_len=config['max_seq_len'],
do_lower_case=config.get('do_lower_case', True),
for_cn=config.get('for_cn', False),
random_seed=config.get('seed', None))
self._reader = reader
self._dev_count = dev_count
self._batch_size = config['batch_size']
self._max_seq_len = config['max_seq_len']
if phase == 'train':
self._input_file = config['train_file']
self._num_epochs = None # prevent the iterator from terminating
self._shuffle = config.get('shuffle', True)
self._shuffle_buffer = config.get('shuffle_buffer', 5000)
elif phase == 'eval':
self._input_file = config['dev_file']
self._num_epochs = 1
self._shuffle = False
self._batch_size = config.get('pred_batch_size', self._batch_size)
elif phase == 'pred':
self._input_file = config['pred_file']
self._num_epochs = 1
self._shuffle = False
self._batch_size = config.get('pred_batch_size', self._batch_size)
self._phase = phase
# self._batch_size =
self._print_first_n = config.get('print_first_n', 1)
@property
def outputs_attr(self):
if self._is_training:
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32'],
"label_ids": [[-1,1], 'int64'],
"task_ids": [[-1, -1, 1], 'int64']
}
else:
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"task_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32']
}
def load_data(self):
self._data_generator = self._reader.data_generator(self._input_file, self._batch_size, self._num_epochs, dev_count=self._dev_count, shuffle=self._shuffle, phase=self._phase)
def iterator(self):
def list_to_dict(x):
names = ['token_ids', 'segment_ids', 'position_ids', 'task_ids', 'input_mask',
'label_ids', 'unique_ids']
outputs = {n: i for n,i in zip(names, x)}
del outputs['unique_ids']
if not self._is_training:
del outputs['label_ids']
return outputs
for batch in self._data_generator():
yield list_to_dict(batch)
@property
def num_examples(self):
return self._reader.get_num_examples(phase=self._phase)
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddlepalm.interface import reader
from paddlepalm.reader.utils.reader4ernie import MaskLMReader
import numpy as np
class Reader(reader):
def __init__(self, config, phase='train', dev_count=1, print_prefix=''):
"""
Args:
phase: train, eval, pred
"""
self._is_training = phase == 'train'
reader = MaskLMReader(config['vocab_path'],
max_seq_len=config['max_seq_len'],
do_lower_case=config.get('do_lower_case', False),
for_cn=config.get('for_cn', False),
random_seed=config.get('seed', None))
self._reader = reader
self._dev_count = dev_count
self._batch_size = config['batch_size']
self._max_seq_len = config['max_seq_len']
if phase == 'train':
self._input_file = config['train_file']
self._num_epochs = None # prevent the iterator from terminating
self._shuffle = config.get('shuffle', True)
self._shuffle_buffer = config.get('shuffle_buffer', 5000)
elif phase == 'eval':
self._input_file = config['dev_file']
self._num_epochs = 1
self._shuffle = False
self._batch_size = config.get('pred_batch_size', self._batch_size)
elif phase == 'pred':
self._input_file = config['pred_file']
self._num_epochs = 1
self._shuffle = False
self._batch_size = config.get('pred_batch_size', self._batch_size)
self._phase = phase
# self._batch_size =
self._print_first_n = config.get('print_first_n', 1)
@property
def outputs_attr(self):
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32'],
"task_ids": [[-1, -1, 1], 'int64'],
"mask_label": [[-1, 1], 'int64'],
"mask_pos": [[-1, 1], 'int64'],
}
def load_data(self):
self._data_generator = self._reader.data_generator(self._input_file, self._batch_size, self._num_epochs, dev_count=self._dev_count, shuffle=self._shuffle, phase=self._phase)
def iterator(self):
def list_to_dict(x):
names = ['token_ids', 'position_ids', 'segment_ids', 'input_mask',
'task_ids', 'mask_label', 'mask_pos']
outputs = {n: i for n,i in zip(names, x)}
# outputs['batchsize_x_seqlen'] = [self._batch_size * len(outputs['token_ids'][0]) - 1]
return outputs
for batch in self._data_generator():
# print(np.shape(list_to_dict(batch)['token_ids']))
# print(list_to_dict(batch)['mask_label'].tolist())
yield list_to_dict(batch)
def get_epoch_outputs(self):
return {'examples': self._reader.get_examples(self._phase),
'features': self._reader.get_features(self._phase)}
@property
def num_examples(self):
return self._reader.get_num_examples(phase=self._phase)
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddlepalm.interface import reader
from paddlepalm.reader.utils.reader4ernie import MRCReader
class Reader(reader):
def __init__(self, config, phase='train', dev_count=1, print_prefix=''):
"""
Args:
phase: train, eval, pred
"""
self._is_training = phase == 'train'
reader = MRCReader(config['vocab_path'],
max_seq_len=config['max_seq_len'],
do_lower_case=config.get('do_lower_case', False),
tokenizer='FullTokenizer',
for_cn=config.get('for_cn', False),
doc_stride=config['doc_stride'],
max_query_length=config['max_query_len'],
random_seed=config.get('seed', None))
self._reader = reader
self._dev_count = dev_count
self._batch_size = config['batch_size']
self._max_seq_len = config['max_seq_len']
if phase == 'train':
self._input_file = config['train_file']
# self._num_epochs = config['num_epochs']
self._num_epochs = None # prevent the iterator from terminating
self._shuffle = config.get('shuffle', True)
self._shuffle_buffer = config.get('shuffle_buffer', 5000)
elif phase == 'eval':
self._input_file = config['dev_file']
self._num_epochs = 1
self._shuffle = False
self._batch_size = config.get('pred_batch_size', self._batch_size)
elif phase == 'pred':
self._input_file = config['pred_file']
self._num_epochs = 1
self._shuffle = False
self._batch_size = config.get('pred_batch_size', self._batch_size)
self._phase = phase
# self._batch_size =
self._print_first_n = config.get('print_first_n', 1)
# TODO: without slide window version
self._with_slide_window = config.get('with_slide_window', False)
@property
def outputs_attr(self):
if self._is_training:
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32'],
"start_positions": [[-1, 1], 'int64'],
"end_positions": [[-1, 1], 'int64'],
"task_ids": [[-1, -1, 1], 'int64']
}
else:
return {"token_ids": [[-1, -1, 1], 'int64'],
"position_ids": [[-1, -1, 1], 'int64'],
"segment_ids": [[-1, -1, 1], 'int64'],
"task_ids": [[-1, -1, 1], 'int64'],
"input_mask": [[-1, -1, 1], 'float32'],
"unique_ids": [[-1, 1], 'int64']
}
@property
def epoch_outputs_attr(self):
if not self._is_training:
return {"examples": None,
"features": None}
def load_data(self):
self._data_generator = self._reader.data_generator(self._input_file, self._batch_size, self._num_epochs, dev_count=self._dev_count, shuffle=self._shuffle, phase=self._phase)
def iterator(self):
def list_to_dict(x):
names = ['token_ids', 'segment_ids', 'position_ids', 'task_ids', 'input_mask',
'start_positions', 'end_positions', 'unique_ids']
outputs = {n: i for n,i in zip(names, x)}
if self._is_training:
del outputs['unique_ids']
else:
del outputs['start_positions']
del outputs['end_positions']
return outputs
for batch in self._data_generator():
yield list_to_dict(batch)
def get_epoch_outputs(self):
return {'examples': self._reader.get_examples(self._phase),
'features': self._reader.get_features(self._phase)}
@property
def num_examples(self):
return self._reader.get_num_examples(phase=self._phase)
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mask, padding and batching."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
def mask(batch_tokens, total_token_num, vocab_size, CLS=1, SEP=2, MASK=3):
"""
Add mask for batch_tokens, return out, mask_label, mask_pos;
Note: mask_pos responding the batch_tokens after padded;
"""
max_len = max([len(sent) for sent in batch_tokens])
mask_label = []
mask_pos = []
prob_mask = np.random.rand(total_token_num)
# Note: the first token is [CLS], so [low=1]
replace_ids = np.random.randint(1, high=vocab_size, size=total_token_num)
pre_sent_len = 0
prob_index = 0
for sent_index, sent in enumerate(batch_tokens):
mask_flag = False
prob_index += pre_sent_len
for token_index, token in enumerate(sent):
prob = prob_mask[prob_index + token_index]
if prob > 0.15:
continue
elif 0.03 < prob <= 0.15:
# mask
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
sent[token_index] = MASK
mask_flag = True
mask_pos.append(sent_index * max_len + token_index)
elif 0.015 < prob <= 0.03:
# random replace
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
sent[token_index] = replace_ids[prob_index + token_index]
mask_flag = True
mask_pos.append(sent_index * max_len + token_index)
else:
# keep the original token
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
mask_pos.append(sent_index * max_len + token_index)
pre_sent_len = len(sent)
# ensure at least mask one word in a sentence
while not mask_flag:
token_index = int(np.random.randint(1, high=len(sent) - 1, size=1))
if sent[token_index] != SEP and sent[token_index] != CLS:
mask_label.append(sent[token_index])
sent[token_index] = MASK
mask_flag = True
mask_pos.append(sent_index * max_len + token_index)
mask_label = np.array(mask_label).astype("int64").reshape([-1, 1])
mask_pos = np.array(mask_pos).astype("int64").reshape([-1, 1])
return batch_tokens, mask_label, mask_pos
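# Toy usage sketch (token ids are illustrative; 1/2/3 are the default
# [CLS]/[SEP]/[MASK] ids of this function):
#
#   batch = [[1, 5, 6, 7, 2], [1, 8, 9, 2]]
#   total_token_num = sum(len(sent) for sent in batch)  # 9
#   out, mask_label, mask_pos = mask(batch, total_token_num, vocab_size=100)
#   # mask_label holds the original ids of the selected tokens, shape [-1, 1];
#   # mask_pos holds flattened positions, sent_index * max_len + token_index.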
def prepare_batch_data(insts,
total_token_num,
max_len=None,
voc_size=0,
pad_id=None,
cls_id=None,
sep_id=None,
mask_id=None,
return_input_mask=True,
return_max_len=True,
return_num_token=False):
"""
1. generate Tensor of data
2. generate Tensor of position
3. generate self attention mask, [shape: batch_size * max_len * max_len]
"""
batch_src_ids = [inst[0] for inst in insts]
batch_sent_ids = [inst[1] for inst in insts]
batch_pos_ids = [inst[2] for inst in insts]
labels_list = []
# compatible with mrqa, whose example includes start/end positions,
# or unique id
for i in range(3, len(insts[0]), 1):
labels = [inst[i] for inst in insts]
labels = np.array(labels).astype("int64").reshape([-1, 1])
labels_list.append(labels)
# First step: do mask without padding
if mask_id >= 0:
out, mask_label, mask_pos = mask(
batch_src_ids,
total_token_num,
vocab_size=voc_size,
CLS=cls_id,
SEP=sep_id,
MASK=mask_id)
else:
out = batch_src_ids
# Second step: padding
src_id, self_input_mask = pad_batch_data(
out,
max_len=max_len,
pad_idx=pad_id, return_input_mask=True)
pos_id = pad_batch_data(
batch_pos_ids,
max_len=max_len,
pad_idx=pad_id,
return_pos=False,
return_input_mask=False)
sent_id = pad_batch_data(
batch_sent_ids,
max_len=max_len,
pad_idx=pad_id,
return_pos=False,
return_input_mask=False)
if mask_id >= 0:
return_list = [
src_id, pos_id, sent_id, self_input_mask, mask_label, mask_pos
] + labels_list
else:
return_list = [src_id, pos_id, sent_id, self_input_mask] + labels_list
return return_list if len(return_list) > 1 else return_list[0]
def pad_batch_data(insts,
max_len=None,
pad_idx=0,
return_pos=False,
return_input_mask=False,
return_max_len=False,
return_num_token=False):
"""
Pad the instances to the max sequence length in batch, and generate the
corresponding position data and input mask.
"""
return_list = []
if max_len is None:
max_len = max(len(inst) for inst in insts)
# Any token included in dict can be used to pad, since the paddings' loss
# will be masked out by weights and make no effect on parameter gradients.
inst_data = np.array([
list(inst) + list([pad_idx] * (max_len - len(inst))) for inst in insts
])
return_list += [inst_data.astype("int64").reshape([-1, max_len, 1])]
# position data
if return_pos:
inst_pos = np.array([
list(range(0, len(inst))) + [pad_idx] * (max_len - len(inst))
for inst in insts
])
return_list += [inst_pos.astype("int64").reshape([-1, max_len, 1])]
if return_input_mask:
# This is used to avoid attention on paddings.
input_mask_data = np.array([[1] * len(inst) + [0] *
(max_len - len(inst)) for inst in insts])
input_mask_data = np.expand_dims(input_mask_data, axis=-1)
return_list += [input_mask_data.astype("float32")]
if return_max_len:
return_list += [max_len]
if return_num_token:
num_token = 0
for inst in insts:
num_token += len(inst)
return_list += [num_token]
return return_list if len(return_list) > 1 else return_list[0]
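# Toy usage sketch: pad two instances to the in-batch max length and also get
# the attention mask back.
#
#   insts = [[5, 6, 7], [8, 9]]
#   ids, input_mask = pad_batch_data(insts, pad_idx=0, return_input_mask=True)
#   # ids.shape == (2, 3, 1), int64; input_mask.shape == (2, 3, 1), float32,
#   # with 0.0 over the padded position.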
if __name__ == "__main__":
pass
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mask, padding and batching."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from six.moves import xrange
def mask(batch_tokens,
seg_labels,
mask_word_tags,
total_token_num,
vocab_size,
CLS=1,
SEP=2,
MASK=3):
"""
Add mask for batch_tokens; return out, mask_label, mask_pos.
Note: mask_pos corresponds to positions in batch_tokens after padding.
"""
max_len = max([len(sent) for sent in batch_tokens])
mask_label = []
mask_pos = []
prob_mask = np.random.rand(total_token_num)
# Note: the first token is [CLS], so [low=1]
replace_ids = np.random.randint(1, high=vocab_size, size=total_token_num)
pre_sent_len = 0
prob_index = 0
for sent_index, sent in enumerate(batch_tokens):
mask_flag = False
mask_word = mask_word_tags[sent_index]
prob_index += pre_sent_len
if mask_word:
beg = 0
for token_index, token in enumerate(sent):
seg_label = seg_labels[sent_index][token_index]
if seg_label == 1:
continue
if beg == 0:
if seg_label != -1:
beg = token_index
continue
prob = prob_mask[prob_index + beg]
if prob > 0.15:
pass
else:
for index in xrange(beg, token_index):
prob = prob_mask[prob_index + index]
base_prob = 1.0
if index == beg:
base_prob = 0.15
if base_prob * 0.2 < prob <= base_prob:
mask_label.append(sent[index])
sent[index] = MASK
mask_flag = True
mask_pos.append(sent_index * max_len + index)
elif base_prob * 0.1 < prob <= base_prob * 0.2:
mask_label.append(sent[index])
sent[index] = replace_ids[prob_index + index]
mask_flag = True
mask_pos.append(sent_index * max_len + index)
else:
mask_label.append(sent[index])
mask_pos.append(sent_index * max_len + index)
if seg_label == -1:
beg = 0
else:
beg = token_index
else:
for token_index, token in enumerate(sent):
prob = prob_mask[prob_index + token_index]
if prob > 0.15:
continue
elif 0.03 < prob <= 0.15:
# mask
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
sent[token_index] = MASK
mask_flag = True
mask_pos.append(sent_index * max_len + token_index)
elif 0.015 < prob <= 0.03:
# random replace
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
sent[token_index] = replace_ids[prob_index +
token_index]
mask_flag = True
mask_pos.append(sent_index * max_len + token_index)
else:
# keep the original token
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
mask_pos.append(sent_index * max_len + token_index)
pre_sent_len = len(sent)
mask_label = np.array(mask_label).astype("int64").reshape([-1, 1])
mask_pos = np.array(mask_pos).astype("int64").reshape([-1, 1])
return batch_tokens, mask_label, mask_pos
def pad_batch_data(insts,
pad_idx=0,
return_pos=False,
return_input_mask=False,
return_max_len=False,
return_num_token=False,
return_seq_lens=False):
"""
Pad the instances to the max sequence length in batch, and generate the
corresponding position data and attention bias.
"""
return_list = []
max_len = max(len(inst) for inst in insts)
# Any token included in dict can be used to pad, since the paddings' loss
# will be masked out by weights and make no effect on parameter gradients.
inst_data = np.array(
[inst + list([pad_idx] * (max_len - len(inst))) for inst in insts])
return_list += [inst_data.astype("int64").reshape([-1, max_len, 1])]
# position data
if return_pos:
inst_pos = np.array([
list(range(0, len(inst))) + [pad_idx] * (max_len - len(inst))
for inst in insts
])
return_list += [inst_pos.astype("int64").reshape([-1, max_len, 1])]
if return_input_mask:
# This is used to avoid attention on paddings.
input_mask_data = np.array([[1] * len(inst) + [0] *
(max_len - len(inst)) for inst in insts])
input_mask_data = np.expand_dims(input_mask_data, axis=-1)
return_list += [input_mask_data.astype("float32")]
if return_max_len:
return_list += [max_len]
if return_num_token:
num_token = 0
for inst in insts:
num_token += len(inst)
return_list += [num_token]
if return_seq_lens:
seq_lens = np.array([len(inst) for inst in insts])
return_list += [seq_lens.astype("int64").reshape([-1, 1])]
return return_list if len(return_list) > 1 else return_list[0]
if __name__ == "__main__":
pass
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mask, padding and batching."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
def mask(batch_tokens, total_token_num, vocab_size, CLS=1, SEP=2, MASK=3):
"""
Add mask for batch_tokens; return out, mask_label, mask_pos.
Note: mask_pos corresponds to positions in batch_tokens after padding.
"""
max_len = max([len(sent) for sent in batch_tokens])
mask_label = []
mask_pos = []
prob_mask = np.random.rand(total_token_num)
# Note: the first token is [CLS], so [low=1]
replace_ids = np.random.randint(1, high=vocab_size, size=total_token_num)
pre_sent_len = 0
prob_index = 0
for sent_index, sent in enumerate(batch_tokens):
mask_flag = False
prob_index += pre_sent_len
for token_index, token in enumerate(sent):
prob = prob_mask[prob_index + token_index]
if prob > 0.15:
continue
elif 0.03 < prob <= 0.15:
# mask
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
sent[token_index] = MASK
mask_flag = True
mask_pos.append(sent_index * max_len + token_index)
elif 0.015 < prob <= 0.03:
# random replace
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
sent[token_index] = replace_ids[prob_index + token_index]
mask_flag = True
mask_pos.append(sent_index * max_len + token_index)
else:
# keep the original token
if token != SEP and token != CLS:
mask_label.append(sent[token_index])
mask_pos.append(sent_index * max_len + token_index)
pre_sent_len = len(sent)
# ensure at least mask one word in a sentence
while not mask_flag:
token_index = int(np.random.randint(1, high=len(sent) - 1, size=1))
if sent[token_index] != SEP and sent[token_index] != CLS:
mask_label.append(sent[token_index])
sent[token_index] = MASK
mask_flag = True
mask_pos.append(sent_index * max_len + token_index)
mask_label = np.array(mask_label).astype("int64").reshape([-1, 1])
mask_pos = np.array(mask_pos).astype("int64").reshape([-1, 1])
return batch_tokens, mask_label, mask_pos
def prepare_batch_data(insts,
total_token_num,
max_len=None,
voc_size=0,
pad_id=None,
cls_id=None,
sep_id=None,
mask_id=None,
task_id=0,
return_input_mask=True,
return_max_len=True,
return_num_token=False):
"""
1. generate Tensor of data
2. generate Tensor of position
3. generate self attention mask, [shape: batch_size * max_len * max_len]
"""
batch_src_ids = [inst[0] for inst in insts]
batch_sent_ids = [inst[1] for inst in insts]
batch_pos_ids = [inst[2] for inst in insts]
# Should this be done the other way around? Otherwise the word embeddings unrolled in the task layer come from the padded batch_tokens, and the word indices no longer match the pre-padding indices.
# First step: do mask without padding
out, mask_label, mask_pos = mask(
batch_src_ids,
total_token_num,
vocab_size=voc_size,
CLS=cls_id,
SEP=sep_id,
MASK=mask_id)
# Second step: padding
src_id, self_input_mask = pad_batch_data(
out,
max_len=max_len,
pad_idx=pad_id, return_input_mask=True)
pos_id = pad_batch_data(
batch_pos_ids,
max_len=max_len,
pad_idx=pad_id,
return_pos=False,
return_input_mask=False)
sent_id = pad_batch_data(
batch_sent_ids,
max_len=max_len,
pad_idx=pad_id,
return_pos=False,
return_input_mask=False)
task_ids = np.ones_like(
src_id, dtype="int64") * task_id
return_list = [
src_id, pos_id, sent_id, self_input_mask, task_ids, mask_label, mask_pos
]
return return_list if len(return_list) > 1 else return_list[0]
def pad_batch_data(insts,
max_len=None,
pad_idx=0,
return_pos=False,
return_input_mask=False,
return_max_len=False,
return_num_token=False):
"""
Pad the instances to the max sequence length in batch, and generate the
corresponding position data and input mask.
"""
return_list = []
if max_len is None:
max_len = max(len(inst) for inst in insts)
# Any token included in dict can be used to pad, since the paddings' loss
# will be masked out by weights and make no effect on parameter gradients.
inst_data = np.array([
list(inst) + list([pad_idx] * (max_len - len(inst))) for inst in insts
])
return_list += [inst_data.astype("int64").reshape([-1, max_len, 1])]
# position data
if return_pos:
inst_pos = np.array([
list(range(0, len(inst))) + [pad_idx] * (max_len - len(inst))
for inst in insts
])
return_list += [inst_pos.astype("int64").reshape([-1, max_len, 1])]
if return_input_mask:
# This is used to avoid attention on paddings.
input_mask_data = np.array([[1] * len(inst) + [0] *
(max_len - len(inst)) for inst in insts])
input_mask_data = np.expand_dims(input_mask_data, axis=-1)
return_list += [input_mask_data.astype("float32")]
if return_max_len:
return_list += [max_len]
if return_num_token:
num_token = 0
for inst in insts:
num_token += len(inst)
return_list += [num_token]
return return_list if len(return_list) > 1 else return_list[0]
if __name__ == "__main__":
pass
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class MRQAExample(object):
"""A single training/test example for simple sequence classification.
For examples without an answer, the start and end position are -1.
"""
def __init__(self,
qas_id,
question_text,
doc_tokens,
orig_answer_text=None,
start_position=None,
end_position=None,
is_impossible=False):
self.qas_id = qas_id
self.question_text = question_text
self.doc_tokens = doc_tokens
self.orig_answer_text = orig_answer_text
self.start_position = start_position
self.end_position = end_position
self.is_impossible = is_impossible
def __str__(self):
return self.__repr__()
def __repr__(self):
s = ""
s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
s += ", question_text: %s" % (
tokenization.printable_text(self.question_text))
s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
if self.start_position:
s += ", start_position: %d" % (self.start_position)
if self.end_position:
s += ", end_position: %d" % (self.end_position)
if self.is_impossible:
s += ", is_impossible: %r" % (self.is_impossible)
return s
class MRQAFeature(object):
"""A single set of features of data."""
def __init__(self,
unique_id,
example_index,
doc_span_index,
tokens,
token_to_orig_map,
token_is_max_context,
input_ids,
input_mask,
segment_ids,
start_position=None,
end_position=None,
is_impossible=None):
self.unique_id = unique_id
self.example_index = example_index
self.doc_span_index = doc_span_index
self.tokens = tokens
self.token_to_orig_map = token_to_orig_map
self.token_is_max_context = token_is_max_context
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.start_position = start_position
self.end_position = end_position
self.is_impossible = is_impossible
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import absolute_import
import sys
import os
import json
import random
import logging
import numpy as np
import six
from io import open
from collections import namedtuple
import paddlepalm.tokenizer.ernie_tokenizer as tokenization
from paddlepalm.reader.utils.batching4ernie import pad_batch_data
from paddlepalm.reader.utils.mlm_batching import prepare_batch_data
log = logging.getLogger(__name__)
if six.PY3:
import io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8')
def csv_reader(fd, delimiter='\t'):
def gen():
for i in fd:
yield i.rstrip('\n').split(delimiter)
return gen()
class BaseReader(object):
def __init__(self,
vocab_path,
label_map_config=None,
max_seq_len=512,
do_lower_case=True,
in_tokens=False,
is_inference=False,
random_seed=None,
tokenizer="FullTokenizer",
is_classify=True,
is_regression=False,
for_cn=True,
task_id=0):
self.max_seq_len = max_seq_len
self.tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_path, do_lower_case=do_lower_case)
self.vocab = self.tokenizer.vocab
self.pad_id = self.vocab["[PAD]"]
self.cls_id = self.vocab["[CLS]"]
self.sep_id = self.vocab["[SEP]"]
self.mask_id = self.vocab["[MASK]"]
self.in_tokens = in_tokens
self.is_inference = is_inference
self.for_cn = for_cn
self.task_id = task_id
np.random.seed(random_seed)
self.is_classify = is_classify
self.is_regression = is_regression
self.current_example = 0
self.current_epoch = 0
self.num_examples = 0
self.examples = {}
if label_map_config:
with open(label_map_config, encoding='utf8') as f:
self.label_map = json.load(f)
else:
self.label_map = None
def get_train_progress(self):
"""Gets progress for training phase."""
return self.current_example, self.current_epoch
def _read_tsv(self, input_file, quotechar=None):
"""Reads a tab separated value file."""
with open(input_file, 'r', encoding='utf8') as f:
reader = csv_reader(f)
headers = next(reader)
Example = namedtuple('Example', headers)
examples = []
for line in reader:
example = Example(*line)
examples.append(example)
return examples
def _truncate_seq_pair(self, tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
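        # e.g. len(tokens_a)=6, len(tokens_b)=3, max_length=7: tokens are
        # popped from the longer list until the lengths reach (4, 3)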
def _convert_example_to_record(self, example, max_seq_length, tokenizer):
"""Converts a single `Example` into a single `Record`."""
text_a = tokenization.convert_to_unicode(example.text_a)
tokens_a = tokenizer.tokenize(text_a)
tokens_b = None
has_text_b = False
if isinstance(example, dict):
has_text_b = "text_b" in example.keys()
else:
has_text_b = "text_b" in example._fields
if has_text_b:
text_b = tokenization.convert_to_unicode(example.text_b)
tokens_b = tokenizer.tokenize(text_b)
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
self._truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
if len(tokens_a) > max_seq_length - 2:
tokens_a = tokens_a[0:(max_seq_length - 2)]
# The convention in BERT/ERNIE is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
# used as as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
text_type_ids = []
tokens.append("[CLS]")
text_type_ids.append(0)
for token in tokens_a:
tokens.append(token)
text_type_ids.append(0)
tokens.append("[SEP]")
text_type_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
text_type_ids.append(1)
tokens.append("[SEP]")
text_type_ids.append(1)
token_ids = tokenizer.convert_tokens_to_ids(tokens)
position_ids = list(range(len(token_ids)))
if self.is_inference:
Record = namedtuple('Record',
['token_ids', 'text_type_ids', 'position_ids'])
record = Record(
token_ids=token_ids,
text_type_ids=text_type_ids,
position_ids=position_ids)
else:
if self.label_map:
label_id = self.label_map[example.label]
else:
label_id = example.label
Record = namedtuple('Record', [
'token_ids', 'text_type_ids', 'position_ids', 'label_id', 'qid'
])
qid = None
if "qid" in example._fields:
qid = example.qid
record = Record(
token_ids=token_ids,
text_type_ids=text_type_ids,
position_ids=position_ids,
label_id=label_id,
qid=qid)
return record
def _prepare_batch_data(self, examples, batch_size, phase=None):
"""generate batch records"""
batch_records, max_len = [], 0
if len(examples) < batch_size:
            raise Exception('CLS dataset contains too few samples. Expected at least ' + str(batch_size))
for index, example in enumerate(examples):
if phase == "train":
self.current_example = index
record = self._convert_example_to_record(example, self.max_seq_len,
self.tokenizer)
max_len = max(max_len, len(record.token_ids))
if self.in_tokens:
to_append = (len(batch_records) + 1) * max_len <= batch_size
else:
to_append = len(batch_records) < batch_size
if to_append:
batch_records.append(record)
else:
yield self._pad_batch_records(batch_records)
batch_records, max_len = [record], len(record.token_ids)
if phase == 'pred' and batch_records:
yield self._pad_batch_records(batch_records)
def get_num_examples(self, input_file=None, phase=None):
if self.examples is not None:
if phase is None:
phase = 'all'
return len(self.examples[phase])
else:
assert input_file is not None, "Argument input_file should be given or the data_generator should be created when this func is called."
examples = self._read_tsv(input_file)
return len(examples)
def data_generator(self,
input_file,
batch_size,
epoch,
dev_count=1,
shuffle=True,
phase=None):
examples = self._read_tsv(input_file)
if phase is None:
phase = 'all'
self.examples[phase] = examples
def wrapper():
all_dev_batches = []
if epoch is None:
num_epochs = 99999999
else:
num_epochs = epoch
for epoch_index in range(num_epochs):
if phase == "train":
self.current_example = 0
self.current_epoch = epoch_index
if shuffle:
np.random.shuffle(examples)
for batch_data in self._prepare_batch_data(
examples, batch_size, phase=phase):
if len(all_dev_batches) < dev_count:
all_dev_batches.append(batch_data)
if len(all_dev_batches) == dev_count:
for batch in all_dev_batches:
yield batch
all_dev_batches = []
def f():
for i in wrapper():
yield i
return f
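# Illustrative usage (paths and sizes are hypothetical):
#     reader = ClassifyReader(vocab_path='vocab.txt', max_seq_len=128)
#     gen = reader.data_generator('train.tsv', batch_size=32, epoch=2,
#                                 phase='train')
#     for batch in gen():
#         ...  # each batch follows the layout built by _pad_batch_records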
class MaskLMReader(BaseReader):
def _convert_example_to_record(self, example, max_seq_length, tokenizer):
"""Converts a single `Example` into a single `Record`."""
text_a = tokenization.convert_to_unicode(example.text_a)
tokens_a = tokenizer.tokenize(text_a)
tokens_b = None
has_text_b = False
if isinstance(example, dict):
has_text_b = "text_b" in example.keys()
else:
has_text_b = "text_b" in example._fields
if has_text_b:
text_b = tokenization.convert_to_unicode(example.text_b)
tokens_b = tokenizer.tokenize(text_b)
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
self._truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
if len(tokens_a) > max_seq_length - 2:
tokens_a = tokens_a[0:(max_seq_length - 2)]
# The convention in BERT/ERNIE is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
        # used as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
text_type_ids = []
tokens.append("[CLS]")
text_type_ids.append(0)
for token in tokens_a:
tokens.append(token)
text_type_ids.append(0)
tokens.append("[SEP]")
text_type_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
text_type_ids.append(1)
tokens.append("[SEP]")
text_type_ids.append(1)
token_ids = tokenizer.convert_tokens_to_ids(tokens)
position_ids = list(range(len(token_ids)))
return [token_ids, text_type_ids, position_ids]
def batch_reader(self, examples, batch_size, in_tokens, phase):
batch = []
total_token_num = 0
if len(examples) < batch_size:
            raise Exception('MaskLM dataset contains too few samples. Expected at least ' + str(batch_size))
for e in examples:
parsed_line = self._convert_example_to_record(e, self.max_seq_len, self.tokenizer)
to_append = len(batch) < batch_size
if to_append:
batch.append(parsed_line)
total_token_num += len(parsed_line[0])
else:
yield batch, total_token_num
batch = [parsed_line]
total_token_num = len(parsed_line[0])
if len(batch) > 0 and phase == 'pred':
yield batch, total_token_num
def data_generator(self,
input_file,
batch_size,
epoch,
dev_count=1,
shuffle=True,
phase=None):
examples = self._read_tsv(input_file)
if phase is None:
phase = 'all'
self.examples[phase] = examples
def wrapper():
all_dev_batches = []
if epoch is None:
num_epochs = 99999999
else:
num_epochs = epoch
for epoch_index in range(num_epochs):
if phase == "train":
self.current_example = 0
self.current_epoch = epoch_index
if shuffle:
np.random.shuffle(examples)
all_dev_batches = []
for batch_data, num_tokens in self.batch_reader(examples,
batch_size, self.in_tokens, phase=phase):
batch_data = prepare_batch_data(
batch_data,
num_tokens,
voc_size=len(self.vocab),
pad_id=self.pad_id,
cls_id=self.cls_id,
sep_id=self.sep_id,
mask_id=self.mask_id,
                    # max_len=self.max_seq_len,  # note: padding to max_seq_len
                    # would desynchronize mask_pos from the real token positions,
                    # since mask positions are computed w.r.t. the longest
                    # sequence within the batch
return_input_mask=True,
return_max_len=False,
return_num_token=False)
if len(all_dev_batches) < dev_count:
all_dev_batches.append(batch_data)
if len(all_dev_batches) == dev_count:
for batch in all_dev_batches:
yield batch
all_dev_batches = []
return wrapper
class ClassifyReader(BaseReader):
def _read_tsv(self, input_file, quotechar=None):
"""Reads a tab separated value file."""
with open(input_file, 'r', encoding='utf8') as f:
reader = csv_reader(f)
headers = next(reader)
text_indices = [
index for index, h in enumerate(headers) if h != "label"
]
Example = namedtuple('Example', headers)
examples = []
for line in reader:
for index, text in enumerate(line):
if index in text_indices:
if self.for_cn:
line[index] = text.replace(' ', '')
else:
line[index] = text
example = Example(*line)
examples.append(example)
return examples
def _pad_batch_records(self, batch_records):
batch_token_ids = [record.token_ids for record in batch_records]
batch_text_type_ids = [record.text_type_ids for record in batch_records]
batch_position_ids = [record.position_ids for record in batch_records]
if not self.is_inference:
batch_labels = [record.label_id for record in batch_records]
if self.is_classify:
batch_labels = np.array(batch_labels).astype("int64").reshape(
[-1, 1])
elif self.is_regression:
batch_labels = np.array(batch_labels).astype("float32").reshape(
[-1, 1])
            if batch_records[0].qid is not None:  # qid may legitimately be 0
batch_qids = [record.qid for record in batch_records]
batch_qids = np.array(batch_qids).astype("int64").reshape(
[-1, 1])
else:
batch_qids = np.array([]).astype("int64").reshape([-1, 1])
# padding
padded_token_ids, input_mask = pad_batch_data(
batch_token_ids, pad_idx=self.pad_id, return_input_mask=True)
padded_text_type_ids = pad_batch_data(
batch_text_type_ids, pad_idx=self.pad_id)
padded_position_ids = pad_batch_data(
batch_position_ids, pad_idx=self.pad_id)
padded_task_ids = np.ones_like(
padded_token_ids, dtype="int64") * self.task_id
return_list = [
padded_token_ids, padded_text_type_ids, padded_position_ids,
padded_task_ids, input_mask
]
if not self.is_inference:
return_list += [batch_labels, batch_qids]
return return_list
class SequenceLabelReader(BaseReader):
def _pad_batch_records(self, batch_records):
batch_token_ids = [record.token_ids for record in batch_records]
batch_text_type_ids = [record.text_type_ids for record in batch_records]
batch_position_ids = [record.position_ids for record in batch_records]
batch_label_ids = [record.label_ids for record in batch_records]
# padding
padded_token_ids, input_mask, batch_seq_lens = pad_batch_data(
batch_token_ids,
pad_idx=self.pad_id,
return_input_mask=True,
return_seq_lens=True)
padded_text_type_ids = pad_batch_data(
batch_text_type_ids, pad_idx=self.pad_id)
padded_position_ids = pad_batch_data(
batch_position_ids, pad_idx=self.pad_id)
padded_label_ids = pad_batch_data(
batch_label_ids, pad_idx=len(self.label_map) - 1)
padded_task_ids = np.ones_like(
padded_token_ids, dtype="int64") * self.task_id
return_list = [
padded_token_ids, padded_text_type_ids, padded_position_ids,
padded_task_ids, input_mask, padded_label_ids, batch_seq_lens
]
return return_list
def _reseg_token_label(self, tokens, labels, tokenizer):
assert len(tokens) == len(labels)
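        # e.g. a token "Washington" labeled "B-LOC" that the word-piece
        # tokenizer splits into ["Wash", "##ing", "##ton"] is relabeled as
        # ["B-LOC", "I-LOC", "I-LOC"]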
ret_tokens = []
ret_labels = []
for token, label in zip(tokens, labels):
sub_token = tokenizer.tokenize(token)
if len(sub_token) == 0:
continue
ret_tokens.extend(sub_token)
if len(sub_token) == 1:
ret_labels.append(label)
continue
if label == "O" or label.startswith("I-"):
ret_labels.extend([label] * len(sub_token))
elif label.startswith("B-"):
i_label = "I-" + label[2:]
ret_labels.extend([label] + [i_label] * (len(sub_token) - 1))
elif label.startswith("S-"):
b_laebl = "B-" + label[2:]
e_label = "E-" + label[2:]
i_label = "I-" + label[2:]
ret_labels.extend([b_laebl] + [i_label] * (len(sub_token) - 2) + [e_label])
elif label.startswith("E-"):
i_label = "I-" + label[2:]
ret_labels.extend([i_label] * (len(sub_token) - 1) + [label])
assert len(ret_tokens) == len(ret_labels)
return ret_tokens, ret_labels
def _convert_example_to_record(self, example, max_seq_length, tokenizer):
        # text and labels are assumed to be joined by the \2 control character
        # (ERNIE's sequence-labeling data convention); the raw character is
        # unprintable in the original source
        tokens = tokenization.convert_to_unicode(example.text_a).split(u"\2")
        labels = tokenization.convert_to_unicode(example.label).split(u"\2")
tokens, labels = self._reseg_token_label(tokens, labels, tokenizer)
if len(tokens) > max_seq_length - 2:
tokens = tokens[0:(max_seq_length - 2)]
labels = labels[0:(max_seq_length - 2)]
tokens = ["[CLS]"] + tokens + ["[SEP]"]
token_ids = tokenizer.convert_tokens_to_ids(tokens)
position_ids = list(range(len(token_ids)))
text_type_ids = [0] * len(token_ids)
no_entity_id = len(self.label_map) - 1
label_ids = [no_entity_id] + [
self.label_map[label] for label in labels
] + [no_entity_id]
Record = namedtuple(
'Record',
['token_ids', 'text_type_ids', 'position_ids', 'label_ids'])
record = Record(
token_ids=token_ids,
text_type_ids=text_type_ids,
position_ids=position_ids,
label_ids=label_ids)
return record
class ExtractEmbeddingReader(BaseReader):
def _pad_batch_records(self, batch_records):
batch_token_ids = [record.token_ids for record in batch_records]
batch_text_type_ids = [record.text_type_ids for record in batch_records]
batch_position_ids = [record.position_ids for record in batch_records]
# padding
padded_token_ids, input_mask, seq_lens = pad_batch_data(
batch_token_ids,
pad_idx=self.pad_id,
return_input_mask=True,
return_seq_lens=True)
padded_text_type_ids = pad_batch_data(
batch_text_type_ids, pad_idx=self.pad_id)
padded_position_ids = pad_batch_data(
batch_position_ids, pad_idx=self.pad_id)
padded_task_ids = np.ones_like(
padded_token_ids, dtype="int64") * self.task_id
return_list = [
padded_token_ids, padded_text_type_ids, padded_position_ids,
padded_task_ids, input_mask, seq_lens
]
return return_list
class MRCReader(BaseReader):
def __init__(self,
vocab_path,
label_map_config=None,
max_seq_len=512,
do_lower_case=True,
in_tokens=False,
random_seed=None,
tokenizer="FullTokenizer",
is_classify=True,
is_regression=False,
for_cn=True,
task_id=0,
doc_stride=128,
max_query_length=64,
remove_noanswer=True):
self.max_seq_len = max_seq_len
self.tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_path, do_lower_case=do_lower_case)
self.vocab = self.tokenizer.vocab
self.pad_id = self.vocab["[PAD]"]
self.cls_id = self.vocab["[CLS]"]
self.sep_id = self.vocab["[SEP]"]
self.in_tokens = in_tokens
self.for_cn = for_cn
self.task_id = task_id
self.doc_stride = doc_stride
self.max_query_length = max_query_length
self.examples = {}
self.features = {}
self.remove_noanswer = remove_noanswer
if random_seed is not None:
np.random.seed(random_seed)
self.current_example = 0
self.current_epoch = 0
self.num_examples = 0
self.Example = namedtuple('Example',
['qas_id', 'question_text', 'doc_tokens', 'orig_answer_text',
'start_position', 'end_position'])
self.Feature = namedtuple("Feature", ["unique_id", "example_index", "doc_span_index",
"tokens", "token_to_orig_map", "token_is_max_context",
"token_ids", "position_ids", "text_type_ids",
"start_position", "end_position"])
self.DocSpan = namedtuple("DocSpan", ["start", "length"])
def _read_json(self, input_file, is_training):
examples = []
with open(input_file, "r", encoding='utf8') as f:
input_data = json.load(f)["data"]
for entry in input_data:
for paragraph in entry["paragraphs"]:
paragraph_text = paragraph["context"]
for qa in paragraph["qas"]:
qas_id = qa["id"]
question_text = qa["question"]
start_pos = None
end_pos = None
orig_answer_text = None
if is_training:
if len(qa["answers"]) != 1:
raise ValueError(
"For training, each question should have exactly 1 answer."
)
answer = qa["answers"][0]
orig_answer_text = answer["text"]
answer_offset = answer["answer_start"]
answer_length = len(orig_answer_text)
doc_tokens = [
paragraph_text[:answer_offset],
paragraph_text[answer_offset:answer_offset +
answer_length],
paragraph_text[answer_offset + answer_length:]
]
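                            # the context is split into [before, answer, after],
                            # so the answer always occupies the middle chunk
                            # (index 1) of doc_tokens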
start_pos = 1
end_pos = 1
actual_text = " ".join(doc_tokens[start_pos:(end_pos
+ 1)])
if actual_text.find(orig_answer_text) == -1:
log.info("Could not find answer: '%s' vs. '%s'",
actual_text, orig_answer_text)
continue
else:
doc_tokens = tokenization.tokenize_chinese_chars(
paragraph_text)
example = self.Example(
qas_id=qas_id,
question_text=question_text,
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_pos,
end_position=end_pos)
examples.append(example)
return examples
def _improve_answer_span(self, doc_tokens, input_start, input_end,
tokenizer, orig_answer_text):
tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text))
for new_start in range(input_start, input_end + 1):
for new_end in range(input_end, new_start - 1, -1):
text_span = " ".join(doc_tokens[new_start:(new_end + 1)])
if text_span == tok_answer_text:
return (new_start, new_end)
return (input_start, input_end)
def _check_is_max_context(self, doc_spans, cur_span_index, position):
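        """Return True iff this doc span gives `position` its maximal context.

        With the sliding-window approach a token may appear in several spans;
        the span in which the token has the largest min(left, right) context
        (plus a small bonus for longer spans) is the one used to score it.
        """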
best_score = None
best_span_index = None
for (span_index, doc_span) in enumerate(doc_spans):
end = doc_span.start + doc_span.length - 1
if position < doc_span.start:
continue
if position > end:
continue
num_left_context = position - doc_span.start
num_right_context = end - position
score = min(num_left_context,
num_right_context) + 0.01 * doc_span.length
if best_score is None or score > best_score:
best_score = score
best_span_index = span_index
return cur_span_index == best_span_index
def _convert_example_to_feature(self, examples, max_seq_length, tokenizer,
is_training, remove_noanswer=True):
features = []
unique_id = 1000000000
print('converting examples to features...')
for (example_index, example) in enumerate(examples):
if example_index % 1000 == 0:
print('processing {}th example...'.format(example_index))
query_tokens = tokenizer.tokenize(example.question_text)
if len(query_tokens) > self.max_query_length:
query_tokens = query_tokens[0:self.max_query_length]
tok_to_orig_index = []
orig_to_tok_index = []
all_doc_tokens = []
for (i, token) in enumerate(example.doc_tokens):
orig_to_tok_index.append(len(all_doc_tokens))
sub_tokens = tokenizer.tokenize(token)
for sub_token in sub_tokens:
tok_to_orig_index.append(i)
all_doc_tokens.append(sub_token)
tok_start_position = None
tok_end_position = None
if is_training:
tok_start_position = orig_to_tok_index[example.start_position]
if example.end_position < len(example.doc_tokens) - 1:
tok_end_position = orig_to_tok_index[example.end_position +
1] - 1
else:
tok_end_position = len(all_doc_tokens) - 1
(tok_start_position,
tok_end_position) = self._improve_answer_span(
all_doc_tokens, tok_start_position, tok_end_position,
tokenizer, example.orig_answer_text)
max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
doc_spans = []
start_offset = 0
while start_offset < len(all_doc_tokens):
length = len(all_doc_tokens) - start_offset
if length > max_tokens_for_doc:
length = max_tokens_for_doc
doc_spans.append(self.DocSpan(start=start_offset, length=length))
if start_offset + length == len(all_doc_tokens):
break
start_offset += min(length, self.doc_stride)
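            # e.g. with 1000 doc tokens, max_tokens_for_doc=382 and
            # doc_stride=128, spans start at offsets 0, 128, 256, ... and each
            # holds at most 382 tokens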
for (doc_span_index, doc_span) in enumerate(doc_spans):
tokens = []
token_to_orig_map = {}
token_is_max_context = {}
text_type_ids = []
tokens.append("[CLS]")
text_type_ids.append(0)
for token in query_tokens:
tokens.append(token)
text_type_ids.append(0)
tokens.append("[SEP]")
text_type_ids.append(0)
for i in range(doc_span.length):
split_token_index = doc_span.start + i
token_to_orig_map[len(tokens)] = tok_to_orig_index[
split_token_index]
is_max_context = self._check_is_max_context(
doc_spans, doc_span_index, split_token_index)
token_is_max_context[len(tokens)] = is_max_context
tokens.append(all_doc_tokens[split_token_index])
text_type_ids.append(1)
tokens.append("[SEP]")
text_type_ids.append(1)
token_ids = tokenizer.convert_tokens_to_ids(tokens)
position_ids = list(range(len(token_ids)))
start_position = None
end_position = None
if is_training:
doc_start = doc_span.start
doc_end = doc_span.start + doc_span.length - 1
out_of_span = False
if not (tok_start_position >= doc_start and
tok_end_position <= doc_end):
out_of_span = True
if out_of_span:
start_position = 0
end_position = 0
if remove_noanswer:
continue
else:
doc_offset = len(query_tokens) + 2
start_position = tok_start_position - doc_start + doc_offset
end_position = tok_end_position - doc_start + doc_offset
feature = self.Feature(
unique_id=unique_id,
example_index=example_index,
doc_span_index=doc_span_index,
tokens=tokens,
token_to_orig_map=token_to_orig_map,
token_is_max_context=token_is_max_context,
token_ids=token_ids,
position_ids=position_ids,
text_type_ids=text_type_ids,
start_position=start_position,
end_position=end_position)
features.append(feature)
unique_id += 1
return features
def _prepare_batch_data(self, records, batch_size, phase=None):
"""generate batch records"""
batch_records, max_len = [], 0
if len(records) < batch_size:
            raise Exception('MRC dataset contains too few samples. Expected at least ' + str(batch_size))
for index, record in enumerate(records):
if phase == "train":
self.current_example = index
max_len = max(max_len, len(record.token_ids))
if self.in_tokens:
to_append = (len(batch_records) + 1) * max_len <= batch_size
else:
to_append = len(batch_records) < batch_size
if to_append:
batch_records.append(record)
else:
yield self._pad_batch_records(batch_records, phase == "train")
batch_records, max_len = [record], len(record.token_ids)
if phase == 'pred' and batch_records:
yield self._pad_batch_records(batch_records, phase == "train")
def _pad_batch_records(self, batch_records, is_training):
batch_token_ids = [record.token_ids for record in batch_records]
batch_text_type_ids = [record.text_type_ids for record in batch_records]
batch_position_ids = [record.position_ids for record in batch_records]
if is_training:
batch_start_position = [
record.start_position for record in batch_records
]
batch_end_position = [
record.end_position for record in batch_records
]
batch_start_position = np.array(batch_start_position).astype(
"int64").reshape([-1, 1])
batch_end_position = np.array(batch_end_position).astype(
"int64").reshape([-1, 1])
else:
batch_size = len(batch_token_ids)
batch_start_position = np.zeros(
shape=[batch_size, 1], dtype="int64")
batch_end_position = np.zeros(shape=[batch_size, 1], dtype="int64")
batch_unique_ids = [record.unique_id for record in batch_records]
batch_unique_ids = np.array(batch_unique_ids).astype("int64").reshape(
[-1, 1])
# padding
padded_token_ids, input_mask = pad_batch_data(
batch_token_ids, pad_idx=self.pad_id, return_input_mask=True)
padded_text_type_ids = pad_batch_data(
batch_text_type_ids, pad_idx=self.pad_id)
padded_position_ids = pad_batch_data(
batch_position_ids, pad_idx=self.pad_id)
padded_task_ids = np.ones_like(
padded_token_ids, dtype="int64") * self.task_id
return_list = [
padded_token_ids, padded_text_type_ids, padded_position_ids,
padded_task_ids, input_mask, batch_start_position,
batch_end_position, batch_unique_ids
]
return return_list
def get_num_examples(self, phase):
return len(self.features[phase])
def get_features(self, phase):
return self.features[phase]
def get_examples(self, phase):
return self.examples[phase]
def data_generator(self,
input_file,
batch_size,
epoch,
dev_count=1,
shuffle=True,
phase=None):
examples = self.examples.get(phase, None)
features = self.features.get(phase, None)
if not examples:
examples = self._read_json(input_file, phase == "train")
features = self._convert_example_to_feature(
examples, self.max_seq_len, self.tokenizer, phase == "train", remove_noanswer=self.remove_noanswer)
self.examples[phase] = examples
self.features[phase] = features
def wrapper():
all_dev_batches = []
if epoch is None:
num_epochs = 99999999
else:
num_epochs = epoch
for epoch_index in range(num_epochs):
if phase == "train":
self.current_example = 0
self.current_epoch = epoch_index
if phase == "train" and shuffle:
np.random.shuffle(features)
for batch_data in self._prepare_batch_data(
features, batch_size, phase=phase):
if len(all_dev_batches) < dev_count:
all_dev_batches.append(batch_data)
if len(all_dev_batches) == dev_count:
for batch in all_dev_batches:
yield batch
all_dev_batches = []
return wrapper
if __name__ == '__main__':
pass
#!/bin/bash
if [[ $# != 1 ]]; then
echo "usage: bash convert_params.sh <params_dir>"
exit 1
fi
if [[ -f $1/__palminfo__ ]]; then
echo "already converted."
exit 0
fi
echo "converting..."
if [[ -d $1/params ]]; then
cd $1/params
else
cd $1
fi
mkdir .palm.backup
for file in $(ls *)
do cp $file .palm.backup; mv $file "__paddlepalm_"$file
done
tar -cf __rawmodel__ .palm.backup/*
rm .palm.backup/*
mv __rawmodel__ .palm.backup
# find . ! -name '__rawmodel__' -exec rm {} +
tar -cf __palmmodel__ __paddlepalm_*
touch __palminfo__
ls __paddlepalm_* > __palminfo__
rm __paddlepalm_*
cd - >/dev/null
echo "done!"
#!/bin/bash
set -e
if [[ $# != 1 ]]; then
echo "Usage: bash download_pretrain.sh <bert|ernie>"
exit 1
fi
if [[ $1 == 'bert' ]]; then
name="bert"
link="https://bert-models.bj.bcebos.com/uncased_L-24_H-1024_A-16.tar.gz"
packname="uncased_L-24_H-1024_A-16.tar.gz"
dirname="uncased_L-24_H-1024_A-16"
elif [[ $1 == 'ernie' ]]; then
    name="ernie"
    link="https://ernie.bj.bcebos.com/ERNIE_Large_en_stable-2.0.0.tar.gz"
    packname="ERNIE_Large_en_stable-2.0.0.tar.gz"
    dirname=""
else
echo "$1 is currently not supported."
exit 1
fi
if [[ ! -d pretrain_model ]]; then
mkdir pretrain_model
fi
cd pretrain_model
mkdir $name
cd $name
echo "downloading ${name}..."
wget --no-check-certificate $link
echo "decompressing..."
tar -zxf $packname
rm -rf $packname
if [[ $dirname != "" ]]; then
mv $dirname/* .
rm -rf $dirname
fi
cd ../..
#!/bin/bash
if [[ $# != 1 ]]; then
echo "usage: bash recover_params.sh <params_dir>"
exit 1
fi
if [[ ! -d $1 ]]; then
echo "$1 not found."
exit 1
fi
if [[ ! -f $1/__palmmodel__ ]]; then
echo "paddlepalm model not found."
exit 1
fi
echo "recovering..."
if [[ -d $1/params ]]; then
cd $1/params
else
cd $1
fi
rm __palm*
mv .palm.backup/__rawmodel__ .
rm -rf .palm.backup
tar -xf __rawmodel__
mv .palm.backup/* .
rm __rawmodel__
rm -rf .palm.backup
cd - >/dev/null
 [metadata]
-name = paddle-palm
+name = paddlepalm
 author = zhangyiming
 author_email = zhangyiming04@baidu.com
-version = 1.2
+version = 1.0.0
-description = Paddle-PALM
+description = PaddlePALM
 long_description = file: README.md
 long_description_content_type = text/markdown
......
@@ -27,6 +27,8 @@ classifier =
 keywords =
     paddlepaddle
     paddle
+    nlp
+    pretrain
     multi-task-learning
 [options]
......
@@ -18,7 +18,7 @@
 """
 Setup script.
 Authors: zhouxiangyang(zhouxiangyang@baidu.com)
-Date: 2019/09/29 21:00:01
+Date: 2020/1/22 12:00:01
 """
 import setuptools
 with open("README.md", "r") as fh:
@@ -28,10 +28,10 @@ setuptools.setup(
     version="1.0.0",
     author="PaddlePaddle",
     author_email="zhangyiming04@baidu.com",
-    description="A Multi-task Learning Lib for PaddlePaddle Users.",
+    description="a flexible, general and easy-to-use NLP large-scale pretraining and multi-task learning framework.",
-    long_description=long_description,
-    long_description_content_type="text/markdown",
+    # long_description=long_description,
+    # long_description_content_type="text/markdown",
-    url="https://github.com/PaddlePadd",
+    url="https://github.com/PaddlePaddle/PALM",
     # packages=setuptools.find_packages(),
     packages = ['paddlepalm',
                 'paddlepalm.backbone',
@@ -39,16 +39,20 @@ setuptools.setup(
                 'paddlepalm.optimizer',
                 'paddlepalm.reader',
                 'paddlepalm.reader.utils',
-                'paddlepalm.task_paradigm',
+                'paddlepalm.head',
+                'paddlepalm.distribute',
+                'paddlepalm.lr_sched',
                 'paddlepalm.tokenizer',
                 'paddlepalm.utils'],
     package_dir={'paddlepalm':'./paddlepalm',
                  'paddlepalm.backbone':'./paddlepalm/backbone',
                  'paddlepalm.backbone.utils':'./paddlepalm/backbone/utils',
                  'paddlepalm.optimizer':'./paddlepalm/optimizer',
+                 'paddlepalm.lr_sched': './paddlepalm/lr_sched',
+                 'paddlepalm.distribute': './paddlepalm/distribute',
                  'paddlepalm.reader':'./paddlepalm/reader',
                  'paddlepalm.reader.utils':'./paddlepalm/reader/utils',
-                 'paddlepalm.task_paradigm':'./paddlepalm/task_paradigm',
+                 'paddlepalm.head':'./paddlepalm/head',
                  'paddlepalm.tokenizer':'./paddlepalm/tokenizer',
                  'paddlepalm.utils':'./paddlepalm/utils'},
     platforms = "any",
@@ -64,7 +68,7 @@ setuptools.setup(
         'Programming Language :: Python :: 3.7',
     ],
     install_requires = [
-        'paddlepaddle-gpu>=1.6.1'
+        'paddlepaddle-gpu>=1.6.3'
     ]
 )
......
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
from paddle.fluid import layers
from paddlepalm.interface import task_paradigm
import numpy as np
import os
class TaskParadigm(task_paradigm):
'''
matching
'''
def __init__(self, config, phase, backbone_config=None):
self._is_training = phase == 'train'
self._hidden_size = backbone_config['hidden_size']
        if 'initializer_range' in config:
            # wrap the configured range in an initializer; a bare float here
            # would break the fc param_attr below
            self._param_initializer = fluid.initializer.TruncatedNormal(
                scale=config['initializer_range'])
else:
self._param_initializer = fluid.initializer.TruncatedNormal(
scale=backbone_config.get('initializer_range', 0.02))
if 'dropout_prob' in config:
self._dropout_prob = config['dropout_prob']
else:
self._dropout_prob = backbone_config.get('hidden_dropout_prob', 0.0)
self._pred_output_path = config.get('pred_output_path', None)
self._preds = []
@property
def inputs_attrs(self):
if self._is_training:
reader = {"label_ids": [[-1, 1], 'int64']}
else:
reader = {}
bb = {"sentence_pair_embedding": [[-1, self._hidden_size], 'float32']}
return {'reader': reader, 'backbone': bb}
@property
def outputs_attrs(self):
if self._is_training:
return {"loss": [[1], 'float32']}
else:
return {"logits": [[-1, 2], 'float32']}
def build(self, inputs, scope_name=""):
if self._is_training:
labels = inputs["reader"]["label_ids"]
cls_feats = inputs["backbone"]["sentence_pair_embedding"]
if self._is_training:
cls_feats = fluid.layers.dropout(
x=cls_feats,
dropout_prob=self._dropout_prob,
dropout_implementation="upscale_in_train")
logits = fluid.layers.fc(
input=cls_feats,
size=2,
param_attr=fluid.ParamAttr(
name=scope_name+"cls_out_w",
initializer=self._param_initializer),
bias_attr=fluid.ParamAttr(
name=scope_name+"cls_out_b",
initializer=fluid.initializer.Constant(0.)))
if self._is_training:
ce_loss, probs = fluid.layers.softmax_with_cross_entropy(
logits=logits, label=labels, return_softmax=True)
loss = fluid.layers.mean(x=ce_loss)
return {'loss': loss}
else:
return {'logits': logits}
def postprocess(self, rt_outputs):
if not self._is_training:
logits = rt_outputs['logits']
preds = np.argmax(logits, -1)
self._preds.extend(preds.tolist())
def epoch_postprocess(self, post_inputs):
        # nothing is declared in epoch_inputs_attrs for this head, so
        # post_inputs carries no elements here
if not self._is_training:
if self._pred_output_path is None:
raise ValueError('argument pred_output_path not found in config. Please add it into config dict/file.')
with open(os.path.join(self._pred_output_path, 'predictions.json'), 'w') as writer:
for p in self._preds:
writer.write(str(p)+'\n')
print('Predictions saved at '+os.path.join(self._pred_output_path, 'predictions.json'))
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
from paddlepalm.interface import task_paradigm
from paddle.fluid import layers
from paddlepalm.backbone.utils.transformer import pre_process_layer
class TaskParadigm(task_paradigm):
'''
matching
'''
def __init__(self, config, phase, backbone_config=None):
self._is_training = phase == 'train'
self._emb_size = backbone_config['hidden_size']
self._hidden_size = backbone_config['hidden_size']
self._vocab_size = backbone_config['vocab_size']
self._hidden_act = backbone_config['hidden_act']
self._initializer_range = backbone_config['initializer_range']
@property
def inputs_attrs(self):
        reader = {
            "mask_label": [[-1, 1], 'int64'],
            "mask_pos": [[-1, 1], 'int64'],
            # consumed in build() to clamp mask_pos at training time
            "batchsize_x_seqlen": [[1], 'int64']}
        if not self._is_training:
            del reader['mask_label']
            del reader['batchsize_x_seqlen']
bb = {
"encoder_outputs": [[-1, -1, self._hidden_size], 'float32'],
"embedding_table": [[-1, self._vocab_size, self._emb_size], 'float32']}
return {'reader': reader, 'backbone': bb}
@property
def outputs_attrs(self):
if self._is_training:
return {"loss": [[1], 'float32']}
else:
return {"logits": [[-1], 'float32']}
def build(self, inputs, scope_name=""):
mask_pos = inputs["reader"]["mask_pos"]
if self._is_training:
mask_label = inputs["reader"]["mask_label"]
max_position = inputs["reader"]["batchsize_x_seqlen"] - 1
mask_pos = fluid.layers.elementwise_min(mask_pos, max_position)
mask_pos.stop_gradient = True
word_emb = inputs["backbone"]["embedding_table"]
enc_out = inputs["backbone"]["encoder_outputs"]
emb_size = word_emb.shape[-1]
_param_initializer = fluid.initializer.TruncatedNormal(
scale=self._initializer_range)
reshaped_emb_out = fluid.layers.reshape(
x=enc_out, shape=[-1, emb_size])
# extract masked tokens' feature
mask_feat = fluid.layers.gather(input=reshaped_emb_out, index=mask_pos)
# transform: fc
mask_trans_feat = fluid.layers.fc(
input=mask_feat,
size=emb_size,
act=self._hidden_act,
param_attr=fluid.ParamAttr(
name=scope_name+'mask_lm_trans_fc.w_0',
initializer=_param_initializer),
bias_attr=fluid.ParamAttr(name=scope_name+'mask_lm_trans_fc.b_0'))
# transform: layer norm
mask_trans_feat = pre_process_layer(
mask_trans_feat, 'n', name=scope_name+'mask_lm_trans')
mask_lm_out_bias_attr = fluid.ParamAttr(
name=scope_name+"mask_lm_out_fc.b_0",
initializer=fluid.initializer.Constant(value=0.0))
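        # tied output projection: the word embedding table is reused as the
        # output softmax weights (logits = mask_feat @ W_emb^T + bias)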
fc_out = fluid.layers.matmul(
x=mask_trans_feat,
y=word_emb,
transpose_y=True)
fc_out += fluid.layers.create_parameter(
shape=[self._vocab_size],
dtype='float32',
attr=mask_lm_out_bias_attr,
is_bias=True)
if self._is_training:
mask_lm_loss = fluid.layers.softmax_with_cross_entropy(
logits=fc_out, label=mask_label)
loss = fluid.layers.mean(mask_lm_loss)
return {'loss': loss}
else:
return {'logits': fc_out}
# -*- coding: UTF-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
from paddlepalm.interface import task_paradigm
import collections
import numpy as np
import os
import math
import six
import paddlepalm.tokenizer.ernie_tokenizer as tokenization
import json
RawResult = collections.namedtuple("RawResult",
["unique_id", "start_logits", "end_logits"])
class TaskParadigm(task_paradigm):
""""""
def __init__(self, config, phase, backbone_config=None):
self._is_training = phase == 'train'
self._max_sequence_length = config['max_seq_len']
self._hidden_size = backbone_config['hidden_size']
self._pred_results = []
if phase == 'pred':
self._max_answer_length = config.get('max_answer_len', None)
self._null_score_diff_threshold = config.get('null_score_diff_threshold', 0.0)
self._n_best_size = config.get('n_best_size', 20)
self._pred_output_path = config.get('pred_output_path', None)
self._verbose = config.get('verbose', False)
self._with_negative = config.get('with_negative', False)
self._do_lower_case = config.get('do_lower_case', False)
@property
def inputs_attrs(self):
if self._is_training:
reader = {"start_positions": [[-1, 1], 'int64'],
"end_positions": [[-1, 1], 'int64'],
}
else:
reader = {'unique_ids': [[-1, 1], 'int64']}
bb = {"encoder_outputs": [[-1, -1, self._hidden_size], 'float32']}
return {'reader': reader, 'backbone': bb}
@property
def epoch_inputs_attrs(self):
if not self._is_training:
from_reader = {'examples': None, 'features': None}
return {'reader': from_reader}
@property
    def outputs_attrs(self):
if self._is_training:
return {'loss': [[1], 'float32']}
else:
return {'start_logits': [[-1, -1, 1], 'float32'],
'end_logits': [[-1, -1, 1], 'float32'],
'unique_ids': [[-1, 1], 'int64']}
def build(self, inputs, scope_name=""):
if self._is_training:
start_positions = inputs['reader']['start_positions']
end_positions = inputs['reader']['end_positions']
max_position = inputs["reader"]["seqlen"] - 1
start_positions = fluid.layers.elementwise_min(start_positions, max_position)
end_positions = fluid.layers.elementwise_min(end_positions, max_position)
start_positions.stop_gradient = True
end_positions.stop_gradient = True
else:
unique_id = inputs['reader']['unique_ids']
enc_out = inputs['backbone']['encoder_outputs']
logits = fluid.layers.fc(
input=enc_out,
size=2,
num_flatten_dims=2,
param_attr=fluid.ParamAttr(
name=scope_name+"cls_squad_out_w",
initializer=fluid.initializer.TruncatedNormal(scale=0.02)),
bias_attr=fluid.ParamAttr(
name=scope_name+"cls_squad_out_b", initializer=fluid.initializer.Constant(0.)))
logits = fluid.layers.transpose(x=logits, perm=[2, 0, 1])
start_logits, end_logits = fluid.layers.unstack(x=logits, axis=0)
def _compute_single_loss(logits, positions):
"""Compute start/end loss for mrc model"""
loss = fluid.layers.softmax_with_cross_entropy(
logits=logits, label=positions)
loss = fluid.layers.mean(x=loss)
return loss
if self._is_training:
start_loss = _compute_single_loss(start_logits, start_positions)
end_loss = _compute_single_loss(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2.0
return {'loss': total_loss}
else:
return {'start_logits': start_logits,
'end_logits': end_logits,
'unique_ids': unique_id}
def postprocess(self, rt_outputs):
"""this func will be called after each step(batch) of training/evaluating/predicting process."""
if not self._is_training:
unique_ids = np.squeeze(rt_outputs['unique_ids'], -1)
start_logits = rt_outputs['start_logits']
end_logits = rt_outputs['end_logits']
for idx in range(len(unique_ids)):
if unique_ids[idx] < 0:
continue
if len(self._pred_results) % 1000 == 0:
print("Predicting example: {}".format(len(self._pred_results)))
uid = int(unique_ids[idx])
s = [float(x) for x in start_logits[idx].flat]
e = [float(x) for x in end_logits[idx].flat]
self._pred_results.append(
RawResult(
unique_id=uid,
start_logits=s,
end_logits=e))
def epoch_postprocess(self, post_inputs):
"""(optional interface) this func will be called after evaluation/predicting process and each epoch during training process."""
if not self._is_training:
if self._pred_output_path is None:
raise ValueError('argument pred_output_path not found in config. Please add it into config dict/file.')
examples = post_inputs['reader']['examples']
features = post_inputs['reader']['features']
if not os.path.exists(self._pred_output_path):
os.makedirs(self._pred_output_path)
output_prediction_file = os.path.join(self._pred_output_path, "predictions.json")
output_nbest_file = os.path.join(self._pred_output_path, "nbest_predictions.json")
output_null_log_odds_file = os.path.join(self._pred_output_path, "null_odds.json")
_write_predictions(examples, features, self._pred_results,
self._n_best_size, self._max_answer_length,
self._do_lower_case, output_prediction_file,
output_nbest_file, output_null_log_odds_file,
self._with_negative,
self._null_score_diff_threshold, self._verbose)
def _write_predictions(all_examples, all_features, all_results, n_best_size,
max_answer_length, do_lower_case, output_prediction_file,
output_nbest_file, output_null_log_odds_file,
with_negative, null_score_diff_threshold,
verbose):
"""Write final predictions to the json file and log-odds of null if needed."""
print("Writing predictions to: %s" % (output_prediction_file))
print("Writing nbest to: %s" % (output_nbest_file))
example_index_to_features = collections.defaultdict(list)
for feature in all_features:
example_index_to_features[feature.example_index].append(feature)
unique_id_to_result = {}
for result in all_results:
unique_id_to_result[result.unique_id] = result
_PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
"PrelimPrediction", [
"feature_index", "start_index", "end_index", "start_logit",
"end_logit"
])
all_predictions = collections.OrderedDict()
all_nbest_json = collections.OrderedDict()
scores_diff_json = collections.OrderedDict()
for (example_index, example) in enumerate(all_examples):
features = example_index_to_features[example_index]
prelim_predictions = []
# keep track of the minimum score of null start+end of position 0
score_null = 1000000 # large and positive
        min_null_feature_index = 0  # the paragraph slice with min null score
null_start_logit = 0 # the start logit at the slice with min null score
null_end_logit = 0 # the end logit at the slice with min null score
for (feature_index, feature) in enumerate(features):
result = unique_id_to_result[feature.unique_id]
start_indexes = _get_best_indexes(result.start_logits, n_best_size)
end_indexes = _get_best_indexes(result.end_logits, n_best_size)
# if we could have irrelevant answers, get the min score of irrelevant
if with_negative:
feature_null_score = result.start_logits[0] + result.end_logits[
0]
if feature_null_score < score_null:
score_null = feature_null_score
min_null_feature_index = feature_index
null_start_logit = result.start_logits[0]
null_end_logit = result.end_logits[0]
for start_index in start_indexes:
for end_index in end_indexes:
# We could hypothetically create invalid predictions, e.g., predict
# that the start of the span is in the question. We throw out all
# invalid predictions.
if start_index >= len(feature.tokens):
continue
if end_index >= len(feature.tokens):
continue
if start_index not in feature.token_to_orig_map:
continue
if end_index not in feature.token_to_orig_map:
continue
if not feature.token_is_max_context.get(start_index, False):
continue
if end_index < start_index:
continue
length = end_index - start_index + 1
if length > max_answer_length:
continue
prelim_predictions.append(
_PrelimPrediction(
feature_index=feature_index,
start_index=start_index,
end_index=end_index,
start_logit=result.start_logits[start_index],
end_logit=result.end_logits[end_index]))
if with_negative:
prelim_predictions.append(
_PrelimPrediction(
feature_index=min_null_feature_index,
start_index=0,
end_index=0,
start_logit=null_start_logit,
end_logit=null_end_logit))
prelim_predictions = sorted(
prelim_predictions,
key=lambda x: (x.start_logit + x.end_logit),
reverse=True)
_NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
"NbestPrediction", ["text", "start_logit", "end_logit"])
seen_predictions = {}
nbest = []
for pred in prelim_predictions:
if len(nbest) >= n_best_size:
break
feature = features[pred.feature_index]
if pred.start_index > 0: # this is a non-null prediction
tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1
)]
orig_doc_start = feature.token_to_orig_map[pred.start_index]
orig_doc_end = feature.token_to_orig_map[pred.end_index]
orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end +
1)]
tok_text = " ".join(tok_tokens)
# De-tokenize WordPieces that have been split off.
tok_text = tok_text.replace(" ##", "")
tok_text = tok_text.replace("##", "")
# Clean whitespace
tok_text = tok_text.strip()
tok_text = " ".join(tok_text.split())
orig_text = " ".join(orig_tokens)
final_text = _get_final_text(tok_text, orig_text, do_lower_case,
verbose)
if final_text in seen_predictions:
continue
seen_predictions[final_text] = True
else:
final_text = ""
seen_predictions[final_text] = True
nbest.append(
_NbestPrediction(
text=final_text,
start_logit=pred.start_logit,
end_logit=pred.end_logit))
        # if we didn't include the empty option in the n-best, include it
if with_negative:
if "" not in seen_predictions:
nbest.append(
_NbestPrediction(
text="",
start_logit=null_start_logit,
end_logit=null_end_logit))
# In very rare edge cases we could have no valid predictions. So we
# just create a nonce prediction in this case to avoid failure.
if not nbest:
nbest.append(
_NbestPrediction(
text="empty", start_logit=0.0, end_logit=0.0))
assert len(nbest) >= 1
total_scores = []
best_non_null_entry = None
for entry in nbest:
total_scores.append(entry.start_logit + entry.end_logit)
if not best_non_null_entry:
if entry.text:
best_non_null_entry = entry
        # debug: nbest always has at least one entry, but it may be the null one
        if best_non_null_entry is None:
            print("Warning: no non-null prediction was found.")
probs = _compute_softmax(total_scores)
nbest_json = []
for (i, entry) in enumerate(nbest):
output = collections.OrderedDict()
output["text"] = entry.text
output["probability"] = probs[i]
output["start_logit"] = entry.start_logit
output["end_logit"] = entry.end_logit
nbest_json.append(output)
assert len(nbest_json) >= 1
if not with_negative:
all_predictions[example.qas_id] = nbest_json[0]["text"]
else:
# predict "" iff the null score - the score of best non-null > threshold
score_diff = score_null - best_non_null_entry.start_logit - (
best_non_null_entry.end_logit)
scores_diff_json[example.qas_id] = score_diff
if score_diff > null_score_diff_threshold:
all_predictions[example.qas_id] = ""
else:
all_predictions[example.qas_id] = best_non_null_entry.text
all_nbest_json[example.qas_id] = nbest_json
with open(output_prediction_file, "w") as writer:
writer.write(json.dumps(all_predictions, indent=4) + "\n")
with open(output_nbest_file, "w") as writer:
writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
if with_negative:
with open(output_null_log_odds_file, "w") as writer:
writer.write(json.dumps(scores_diff_json, indent=4) + "\n")
def _get_final_text(pred_text, orig_text, do_lower_case, verbose):
"""Project the tokenized prediction back to the original text."""
# When we created the data, we kept track of the alignment between original
# (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
# now `orig_text` contains the span of our original text corresponding to the
# span that we predicted.
#
# However, `orig_text` may contain extra characters that we don't want in
# our prediction.
#
# For example, let's say:
# pred_text = steve smith
# orig_text = Steve Smith's
#
# We don't want to return `orig_text` because it contains the extra "'s".
#
# We don't want to return `pred_text` because it's already been normalized
# (the MRQA eval script also does punctuation stripping/lower casing but
# our tokenizer does additional normalization like stripping accent
# characters).
#
# What we really want to return is "Steve Smith".
#
    # Therefore, we have to apply a semi-complicated alignment heuristic between
    # `pred_text` and `orig_text` to get a character-to-character alignment. This
# can fail in certain cases in which case we just return `orig_text`.
def _strip_spaces(text):
ns_chars = []
ns_to_s_map = collections.OrderedDict()
for (i, c) in enumerate(text):
if c == " ":
continue
ns_to_s_map[len(ns_chars)] = i
ns_chars.append(c)
ns_text = "".join(ns_chars)
return (ns_text, ns_to_s_map)
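    # e.g. _strip_spaces("a b") -> ("ab", {0: 0, 1: 2})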
# We first tokenize `orig_text`, strip whitespace from the result
# and `pred_text`, and check if they are the same length. If they are
# NOT the same length, the heuristic has failed. If they are the same
# length, we assume the characters are one-to-one aligned.
tokenizer = tokenization.BasicTokenizer(do_lower_case=do_lower_case)
tok_text = " ".join(tokenizer.tokenize(orig_text))
start_position = tok_text.find(pred_text)
if start_position == -1:
if verbose:
print("Unable to find text: '%s' in '%s'" % (pred_text, orig_text))
return orig_text
end_position = start_position + len(pred_text) - 1
(orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
(tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
if len(orig_ns_text) != len(tok_ns_text):
if verbose:
print("Length not equal after stripping spaces: '%s' vs '%s'",
orig_ns_text, tok_ns_text)
return orig_text
# We then project the characters in `pred_text` back to `orig_text` using
# the character-to-character alignment.
tok_s_to_ns_map = {}
for (i, tok_index) in six.iteritems(tok_ns_to_s_map):
tok_s_to_ns_map[tok_index] = i
orig_start_position = None
if start_position in tok_s_to_ns_map:
ns_start_position = tok_s_to_ns_map[start_position]
if ns_start_position in orig_ns_to_s_map:
orig_start_position = orig_ns_to_s_map[ns_start_position]
if orig_start_position is None:
if verbose:
print("Couldn't map start position")
return orig_text
orig_end_position = None
if end_position in tok_s_to_ns_map:
ns_end_position = tok_s_to_ns_map[end_position]
if ns_end_position in orig_ns_to_s_map:
orig_end_position = orig_ns_to_s_map[ns_end_position]
if orig_end_position is None:
if verbose:
print("Couldn't map end position")
return orig_text
output_text = orig_text[orig_start_position:(orig_end_position + 1)]
return output_text
def _get_best_indexes(logits, n_best_size):
"""Get the n-best logits from a list."""
index_and_score = sorted(
enumerate(logits), key=lambda x: x[1], reverse=True)
best_indexes = []
for i in range(len(index_and_score)):
if i >= n_best_size:
break
best_indexes.append(index_and_score[i][0])
return best_indexes
def _compute_softmax(scores):
"""Compute softmax probability over raw logits."""
if not scores:
return []
max_score = None
for score in scores:
if max_score is None or score > max_score:
max_score = score
exp_scores = []
total_sum = 0.0
for score in scores:
x = math.exp(score - max_score)
exp_scores.append(x)
total_sum += x
probs = []
for score in exp_scores:
probs.append(score / total_sum)
return probs
label text_a
1 when was the last time the san antonio spurs missed the playoffshave only missed the playoffs four times since entering the NBA ; they have not missed the playoffs in the 20 seasons since Tim Duncan was drafted by the Spurs in 1997 . With their 50th win in the 2016 -- 17 season , the Spurs extended their record for most consecutive 50 - win seasons to 18 ( the Spurs did not
0 the creation of the federal reserve system was an attempt toReserve System ( also known as the Federal Reserve or simply the Fed ) is the central banking system of the United States of America . Over the years , events such as the Great Depression in the 1930s and the Great Recession during the 2000s have led to the expansion of the
2 group f / 64 was a major backlash against the earlier photographic movement off / 64 was formed , Edward Weston went to a meeting of the John Reed Club , which was founded to support Marxist artists and writers . These circumstances not only helped set up the situation in which a group
0 Bessarabia eventually became under the control of which country?city of Vilnius – its historical capital, which was under Polish control during the inter-war
0 Iran's inflation led to what in 1975-1976?the economy of Iran was flooded with foreign currency, which caused inflation. By 1974, the economy of Iran was experiencing double digit inflation, and despite many large projects to modernize the country, corruption was rampant and caused large
1 How many steam warships did Japan have in 1867?Yokosuka and Nagasaki. By the end of the Tokugawa shogunate in 1867, the Japanese navy of the shogun already possessed eight western-style steam warships around the flagship Kaiyō Maru, which were used against pro-imperial forces during the Boshin war, under the command
0 How many people were inside?f former NFL head coach Dan Reeves, suffered a broken back. DeCamillis was seen on a stretcher wearing a neck brace. A line of heavy thunderstorms was moving through the Dallas area at the time, he said, but no other damage to buildings was reported, said Mike Adams, a dispatcher for the Irving, Texas, fire department. Watch the roof collapse on players, coaches » Arnold Payne, a photographer for WFAA, was shooting the Cowboys' practice session when rain began falling "tremendously hard." "I noticed the walls started to waver ... and then I noticed that the lights that were hanging from the ceiling started to sway, and it wouldn't stop," Payne told CNN. Shortly after that, he said, "It was as if someone took a stick pin and hit a balloon." Watch Payne describe being inside when structure collpased » Payne said
0 Ishita Dutta is the sister of an actress who is typically cast in what genre of movies?he suspense thriller film "Drishyam" (2015) and the Hindi soap opera "Ek Ghar Banaunga", that aired on Star Plus. She is the younger sister of actress Tanushree Dutta. Dutta is the recipient of Femina Miss India Universe title in 2004. During the same year
3 when did the the civil war start and end/Th> </Tr> <Tr> <Td> <P> 110,000 + killed in action / died of wounds 230,000 + accident / disease deaths 25,000 -- 30,000 died in Confederate prisons </P> <P> 365,000 + total dead
1 What has Pakistan told phone companies?Islamabad, Pakistan (CNN) -- Under heavy criticism for a telling cell phone carriers to ban certain words in text messages, the Pakistan Telecommunication Authority went into damage control mode Wednesday. PTA spokesman Mohammed Younis Wednesday denied the existence of the plan, which has met with derision from mobile phone users in the country. "If at all we finally decide to
0 What did Bush say the proposal was to a proposal he vetoed before?(CNN) -- President Bush vetoed an expansion of the federally funded, state-run health insurance program for poor children for a second time Wednesday, telling Congress the bill "moves our country's health care system in the wrong direction." In his veto message, President Bush calls on Congress to extend funding for the current program. "Because the Congress has chosen to send me an essentially identical bill that has the same problems as the flawed bill I previously vetoed, I must veto this legislation, too," he said in a statement released by the White House. The bill would
0 Where did the football team that Bob Simmons coached from 1995 to 2000 play their home games?Cowboys football team [SEP] The 1998 Oklahoma State Cowboys football team represented the Oklahoma State University during the 1998 NCAA Division I-A football season. They participated as members of the Big 12 Conference in the South Division. They were coached by head coach Bob Simmons. [PAR] [TLE] Bob Simmons (American football coach) [SEP] Bob
2 What anniversary was recently celebrated in Iran?us to move our policy in a new direction," Obama said. "So there are going to be a set of objectives that we have in these conversations, but I think that there's the possibility at least of a relationship of mutual respect and progress." The United States and Iran have not had diplomatic relations since 1979. During that year, the Shah of Iran was forced to flee the country and the Ayatollah Khomeini took power. Later that year, Iranian students took over and seized hostages at the U.S. Embassy. Relations have been cut since then. U.S. President George W. Bush labeled Iran as a member of the "axis of evil" after the Sept. 11, 2001 attacks. Iran celebrated the 30th anniversary of the revolution Tuesday with crowds chanting "Death to America." Watch the parade in Iran » Tensions have rippled over issues such as Iran's nuclear program, Israel, and Iraq, and have been aggravated since the outspoken Ahmadinejad came to power in 2005. Western
1 Which Italian composer did George Balanchine add in 1976?[PAR] [TLE] Arcangelo Corelli [SEP] Arcangelo Corelli ( ; 17 February 1653 – 8 January 1713) was an Italian violinist and composer of the Baroque era. His music
0 Will the playstation 4 be announced?a new system sometime in the next five years, of course. Sony continued to sell the PlayStation 2 system and games years after the PlayStation 3 debuted in stores. For Sony's next console, the company will not deploy a streaming delivery system like OnLive, or fully cut out disc retailers like Best Buy and GameStop, Hirai said. While Sony has increased the number of games and other media available for download or streaming through its networks, most people cannot be expected to frequently download several gigabytes worth of data, which can be a time-consuming process, he said. Sony Computer Entertainment president Andrew House said earlier that Sony is not planning to discuss a new console, the website ComputerAndVideogames.com reported on Monday.
1 How many children were the Americans trying to kidnap out of Haiti?Port-au-Prince, Haiti (CNN) -- A Haitian attorney representing 10 Americans charged with kidnapping for trying to take 33 children out of Haiti told CNN Sunday he has resigned. Edwin Coq said he had quit as a lawyer for the Americans. It wasn't immediately clear who would replace him. "I know that they have been looking at other lawyers," said Phyllis Allison, mother of one of those detained, Jim Allen. "They don't know what to do." The 10 missionaries, including group leader Laura Silsby, were charged Thursday with kidnapping children and criminal association. Coq had said that court hearings would be held Monday
0 who kills tree gelbman in happy death dayTree convinces Carter of her predicament by showing that she holds foreknowledge of the day 's events . Tree admits to Carter she does n't like who
0 What will no person be denied the enjoyment of in Georgia based on their religious principles?amended as follows: "Article IV. Section 10. No person within this state shall, upon any pretense, be deprived of the inestimable privilege of worshipping God in any
0 who came up with the idea of footballpass . The popularity of college football grew as it became the dominant version of the sport in the United States for the first half of the 20th century . Bowl games , a college football tradition , attracted a national audience for college
0 what is the name of the female smurfbefore the smurflings created Sassette , Smurfette was the only female smurf in the Smurf Village .
3 Who contributed to the American studies programs at Yale and University of Wyoming?struggle. Norman Holmes Pearson, who worked for the Office of Strategic Studies in London during World War II, returned to Yale and headed the new American studies program, in which scholarship quickly became an instrument of promoting
0 What is the group's former name that now has an office with the Chief Actuary besides the Social Security Administration?Office of the Chief Actuary [SEP] The Office of the Chief Actuary is a government agency that has responsibility for actuarial estimates regarding social welfare programs. In Canada, the Office of the Chief Actuary works with the Canada Pension Plan and the Old Age Security Program. In the United States, both the Social Security Administration and the Centers for Medicare and Medicaid Services have an Office of the Chief Actuary that deals with Social Security and Medicare, respectively. A similar agency in the United Kingdom is called the Government Actuary's Department
0 The actor that playes Han Solo in the "Star Wars" film series stars with Blake Lively and Michiel Huisman in a film directed by who?about a woman who stops aging after an accident at the age of 29. Mills Goodloe and Salvador Paskowitz. The film stars Blake Lively, Michiel Huisman, Kathy Baker, Amanda Crew, Harrison Ford, and Ellen Burstyn. The film was theatrically released on April 24, 2015 by Lionsgate. [PAR] [TLE] Harrison Ford [SEP] Harrison Ford
0 What historically black university's men's basketball coach was formerly head coach at Virginia Tech?well as an 1890 Historically Black Land-Grant University. The University is a member-school of the Thurgood Marshall College Fund. He was also the head coach at Virginia Tech, Tennessee
0 what year did syracuse win the ncaa tournament. Their combined record is 67 -- 39 .
1 where do i get chips at a casino<P> Money is exchanged for tokens in a casino at the casino cage , at the gaming tables , or at a cashier station . The tokens are
0 when was the winter fuel payment first introducedheating over the winter months .
0 Trophy hunting can include areas which would likely be unsuitable for what other types of ecotourism?study states that less than 3% of a trophy hunters' expenditures reach the local level, meaning that the economic incentive and benefit is "minimal, particularly when we consider the vast areas of
1 In simple language, what are the interconnections in an embedding matrix?Since it was quite easy to stack interconnections (wires) inside the embedding matrix, the approach allowed designers to forget completely about the routing of wires (usually a time-consuming operation of PCB design): Anywhere the designer needs a connection, the machine will draw a wire in straight line from one location/pin
2 rho has been to the most all star games in baseballn4 </Li> <Li> Stan Musial 24 </Li>
0 In 1169, Ireland was invaded by which people?High King to ensure the terms of the Treaty of Windsor led Henry II, as King of England, to rule as effective monarch under the title of Lord of Ireland. This title was granted to his younger son but when Henry's heir unexpectedly died the title of King of England and Lord of Ireland became entwined in one
1 What year did a biracial Populist fusion gain the Governors office?to the legislature and governor's office, but the Populists attracted voters displeased with them. In 1896 a biracial, Populist-Republican Fusionist coalition gained the governor's office. The Democrats regained control of the legislature
1 nearest metro station to majnu ka tilla delhiRing Road of Delhi . It is at a walkable distance from ISBT Kashmere Gate . It is approachable through the Kashmeri Gate station of the Delhi Metro , lies on both the Red ( Dilshad Garden - Rithala ) and Yellow Lines ( Samaypur
3 where is california located in the region of the united states<P> California is a U.S. state in the Pacific Region of the United States . With 39.5 million residents , California is the most populous state in the United States and the third largest by area . The
1 when did the baptist church start in americacoworker for religious freedom , are variously credited as founding the earliest Baptist church in North America . In 1639 , Williams established a Baptist church in Providence , Rhode Island , and Clarke began a Baptist church in
0 where was the first capital of the united states locatedpassed to pave the way for a permanent capital . The decision to locate the capital was contentious , but Alexander Hamilton helped broker a compromise in which the federal government would take on war debt incurred during the American Revolutionary War , in exchange for support from northern states for locating the capital along the Potomac
0 What will new regulations will reduce?products off of the consumer market," said Michael Fry, director of conservation advocacy for the American Bird Conservancy. "By putting these restrictions in place, they are allowing a compromise to be made between themselves and organizations who have been working on this problem for a long time." The EPA's new measures, which were handed down Thursday, require that rat poisons be kept in bait stations above ground and in containers that meet agency standards. Loose bait, such as pellets, and the four most hazardous types of pesticides, known as "second-generation anticoagulants," will no longer be sold for personal use. Under the new restrictions, only farmers, livestock owners and certified rodent control employees will be allowed to purchase rat poison in bulk. Bags larger than 8 pounds will no longer be sold at hardware and home-improvement stores. Children who come into contact
0 who played lois lane in the man of steelmixture of toughness and vulnerability , but Peter Bradshaw thought that the character was `` sketchily conceived '' and criticized her lack of chemistry with Cavill . Even so , the film earned over $660 million to become one of her biggest box
0 What year did the writer of the 1968 novel "The Iron Man" become Poet Laurete?Giant is a 1999 American animated science-fiction comedy-drama action film using both traditional animation and computer animation, produced by and directed by Brad Bird in his directorial debut. It is based on the 1968 novel "The Iron Man" by Ted Hughes (which was published in the United States as "The Iron Giant") and was scripted by Tim McCanlies from a story treatment by Bird. The film stars the voices of Eli Marienthal,
2 The conquest of Nice was an effort by Suleiman and what French king?allies. A month prior to the siege of Nice, France supported the Ottomans with an artillery unit during the 1543 Ottoman conquest of Esztergom in northern Hungary. After further advances by the Turks, the Habsburg ruler Ferdinand officially recognized Ottoman ascendancy in Hungary in
0 when was the vaccine receivedfor swine flu, also known as 2009 H1N1, using reverse genetics, he said. "Suitable viruses will hopefully be sent to manufacturers by end of next week," Skinner wrote. Once that happens, vaccine makers will tweak the virus and have "pilot lots" of vaccine ready to be tested by mid- to late June. Several thousand cases have been reported
1 What is the nationality of the actor who costarred with Matt LeBlanc in "All the Queen's Men"?n approximate -99.92% return. [PAR] [TLE] Eddie Izzard [SEP] Edward John "Eddie" Izzard ( ; born 7 February 1962) is an English stand-up comedian, actor, writer and political activist. His comedic style takes the form of rambling, whimsical monologue, and self-referential pantomime. He
0 What sickened thousands of children?executives detained, a local official said, according to Xinhua, Initial tests showed more than 1,300 children in the Hunan province town of Wenping have excessive lead in their blood from the Wugang Manganese Smelting Plant. A second round of testing has been ordered to confirm the results. The plant opened in May 2008 without gaining the approval of the local environment protection bureau, said Huang Wenbin, a deputy environment chief in Wugang City, Xinhua reported. The plant was within 500 meters (about a quarter mile) of three schools. The
0 What percentage of the population are the Kpelle?are descendants of African American and West Indian, mostly Barbadian settlers, make up 2.5%. Congo people, descendants of repatriated Congo and Afro-Caribbean
1 Amount of people left homeless?86 dead, the state news agency said. About 30 people are missing, the official news agency Agencia Brasil said, citing civil defense officials. Earlier reports had indicated as many as 100 people were dead. In addition, more than 54,000 residents have been left homeless, and another 1.5 million have been affected by the heavy rains, the state news agency reported. Brazilian President Luiz Inacio Lula da Silva announced he will release nearly 700 million reais ($350 million)
2 What other countries were in disagreement with the United Nations decision on Burma ?that strongly called upon the government of Myanmar to end its systematic violations of human rights. In January 2007, Russia and China vetoed a
0 Besides Barcelona and Real Madrid, what other team has remained in the Primera Division?first football club to win six out of six competitions in a single year, completing the sextuple in also winning the Spanish Super Cup, UEFA Super Cup and FIFA Club World Cup. In 2011, the club became
0 William Frederick Truax, is a former professional American football tight end in the National Football League (NFL) from 1964 to 1973 for the Los Angeles Rams and the Dallas Cowboys, following the 1970 NFL season, Truax was traded by the Rams to the Cowboys for wide receiver Lance Rentzel, a former American football flanker, in which organization?in New Orleans and college football at Louisiana State University and was drafted in the second round of the 1964 NFL draft. Following the 1970 NFL season, Truax was traded by the Rams to the Cowboys for wide receiver Lance Rentzel. He was part of the Cowboys' Super Bowl VI championship team in 1971. He played
3 What year did Chopin learn that the uprising in Warsaw was crushed?enlist. Chopin, now alone in Vienna, was nostalgic for his homeland, and wrote to a friend, "I curse the moment of my departure." When in September 1831 he learned, while travelling from Vienna to Paris, that the uprising had been crushed, he expressed his anguish in the pages of his private journal: "Oh
1 where do they make money in washington dc; all coinage is produced by the United States Mint . With production facilities in Washington , DC , and Fort Worth , Texas , the Bureau of Engraving and Printing is the largest producer of government security documents in the United States . </P>
0 What did a researcher compare this process to?which makes it one of the highest rates of maternal mortality in the Americas. In wealthy developed nations, only nine women die for every 100,000 births. The five main causes of pregnancy-related deaths in Peru are hemorrhage, pre-eclampsia, infection, complications following abortion and obstructed birth, according to Peru's Ministry of Health figures. Amnesty's Peru researcher Nuria Garcia said, in a written statement: "The rates of maternal mortality in Peru are scandalous. The fact that so many women are dying from preventable causes is a human rights violation. "The Peruvian state is simply ignoring
0 How many containers can Longtan Containers Port Area handle?Port of Nanjing is the largest inland port in China, with annual cargo tonnage reached 191,970,000 t in 2012. The port area is 98 kilometres (61 mi) in length and has 64 berths
0 The 2011 New York City Marathon was sponsored by which Dutch multinational banking corporation?are retail banking, direct banking, commercial banking, investment banking, asset management, and insurance services. ING is an abbreviation for "Internationale Nederlanden Groep " (English: International Netherlands Group). [PAR] [TLE] 2011 New York City Marathon [SEP] The 42nd New York City Marathon took
0 What is human flourishing?it does not involve believing that human nature is purely good or that all people can live up to the Humanist ideals without help. If anything, there is recognition that living up to one's potential is hard
0 What was the result of Dida appealto play in next month's Champions League match at Shakhtar Donetsk after partially winning his appeal to UEFA against a two-match ban. Dida has had one game of his two-match ban suspended for a year following an appeal to UEFA. Brazilian Dida was also fined 60,000 Swiss francs by European football's ruling body following an incident involving a supporter during the Champions clash against Celtic in Scotland on October 3. The 34-year-old Brazilian was initially banned for two games for his theatrics following a Celtic fan's encroachment onto the pitch during the 2-1 defeat at Celtic
1 What is more plentiful in capital projects?generates economic distortion in the public sector by diverting public investment into capital projects where bribes and kickbacks are more plentiful. Officials may increase the technical complexity of public sector projects to conceal or
0 where were band greeted with cheers?the United States for a show in Stamford, Connecticut, on Tuesday, after they have "a few days off to recuperate," Robinson said. The trio was the opening act for Nelson until they were loudly booed in Toronto, a day after the actor-musician's bizarre interview with a CBC radio host. Ironically, the comments that offended Canadians included Thornton's assessment that they were "very reserved" and "it doesn't matter what you say to them." "It's mashed potatoes with no gravy," Thornton told CBC host Jian Ghomeshi. "We tend to play places where people throw things at each other and here they just sort of sit there," he said. Watch Thornton's interview » The audience at Thursday night's show in Toronto loudly booed the Boxmasters, with some shouts of "Here comes the gravy!" The Toronto Star newspaper reported. Thornton's remarks about
0 What do Mexicans call Mexico City?the Federal District in Spanish: D.F., which is read "De-Efe"). They are formally called capitalinos (in reference to the city being the capital of the country), but "[p]erhaps because capitalino is the
0 where does lock stock and barrel come fromindividual components one at a time . One craftsman made the `` lock '' which would have been a `` match lock '' , `` wheel lock '' , `` flint lock '' etc .
1 who has the power to establish a prison system<P> The Federal Bureau of Prisons ( BOP ) is a United States federal law enforcement agency . A subdivision of
0 what are south americas only 2 landlocked countriessuch countries , including five partially recognised states .
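The block above is a sample of the demo classification data: a tab-separated file whose header row names the columns `label` and `text_a`. A minimal sketch of loading such a file, assuming exactly this two-column format (the helper name `load_cls_tsv` is ours, not part of PALM):

```python
import csv

def load_cls_tsv(path):
    """Read a label<TAB>text_a file into (label, text) pairs."""
    examples = []
    with open(path, encoding='utf-8') as f:
        # QUOTE_NONE: the text column contains raw quote characters.
        reader = csv.DictReader(f, delimiter='\t', quoting=csv.QUOTE_NONE)
        for row in reader:
            examples.append((int(row['label']), row['text_a']))
    return examples

# e.g. examples = load_cls_tsv('./data/cls4mrqa/train.tsv')
```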
vocab_path = './pretrain/ernie/vocab.txt'
train_file = './data/cls4mrqa/train.tsv'

config = json.load(open('./pretrain/ernie/ernie_config.json'))
# ernie = palm.backbone.ERNIE(...)
ernie = palm.backbone.ERNIE.from_config(config)

# cls_reader2 = palm.reader.cls(train_file_topic, vocab_path, batch_size, max_seqlen)
# cls_reader3 = palm.reader.cls(train_file_subj, vocab_path, batch_size, max_seqlen)

# Create the readers for the classification tasks; their arguments control the
# dataset format, the number of files, preprocessing rules, etc.
cls_reader = palm.reader.ClassifyReader(vocab_path, max_seqlen)
cls_reader2 = palm.reader.ClassifyReader(vocab_path, max_seqlen)
print(cls_reader.outputs_attr)
# Different backbones demand different features from a task reader. For a
# classification task the basic input features are token_ids and label_ids,
# but BERT-style backbones additionally need position, segment, input_mask,
# etc., so after register_with the reader automatically adds the fields the
# backbone requires.
cls_reader.register_with(ernie)
cls_reader2.register_with(ernie)
print(cls_reader.outputs_attr)

print("preparing data...")
print(cls_reader.num_examples)
cls_reader.load_data(train_file, batch_size)
cls_reader2.load_data(train_file, batch_size)
print(cls_reader.num_examples)
print('done!')

# Create the task heads (classification, matching, machine reading
# comprehension, ...). Each head has task-specific required/optional
# arguments. Heads are decoupled from readers: any head whose required
# dataset-side fields the reader can provide is a valid combination.
cls_head = palm.head.Classify(4, 1024, 0.1)
cls_head2 = palm.head.Classify(4, 1024, 0.1)

# Create one trainer per task from its reader and head. A trainer represents
# a training task: it maintains the training progress and key task
# information, performs validity checks, and controls model saving/loading
# for the task. A MultiHeadTrainer then drives several trainers at once.
trainer = palm.Trainer('cls')
trainer2 = palm.Trainer('senti_cls')
mh_trainer = palm.MultiHeadTrainer([trainer, trainer2])

# match4mrqa.reuse_head_with(mrc4mrqa)

# output_vars = ernie.build(data_vars)
# cls_head.build({'backbone': output_vars, 'reader': data_vars})
loss_var = mh_trainer.build_forward(ernie, [cls_head, cls_head2])

n_steps = cls_reader.num_examples * num_epochs // batch_size
warmup_steps = int(0.1 * n_steps)
print(warmup_steps)
sched = palm.lr_sched.TriangularSchedualer(warmup_steps, n_steps)

adam = palm.optimizer.Adam(loss_var, lr, sched)

mh_trainer.build_backward(optimizer=adam, weight_decay=0.001)

# mh_trainer.random_init_params()
mh_trainer.load_pretrain('pretrain/ernie/params')

# trainer.train(iterator_fn, print_steps=1, save_steps=5, save_path='outputs', save_type='ckpt,predict')
mh_trainer.fit_readers_with_mixratio([cls_reader, cls_reader2], 'cls', 2)
mh_trainer.train(print_steps=1)
# trainer.save()
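As a quick sanity check of the step arithmetic above (`num_epochs`, `batch_size` and `lr` are defined earlier in the script and not shown here; the numbers below are hypothetical):

```python
# Hypothetical sizes, only to make the arithmetic concrete.
num_examples, num_epochs, batch_size = 5000, 2, 16

n_steps = num_examples * num_epochs // batch_size  # 5000 * 2 // 16 = 625
warmup_steps = int(0.1 * n_steps)                  # 62: warm up over the first ~10% of steps
print(n_steps, warmup_steps)
```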
export CUDA_VISIBLE_DEVICES=3
python run.py
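`fit_readers_with_mixratio` in run.py mixes batches from several readers during multi-head training. The sketch below is not PALM's implementation; it only illustrates the general idea of sampling the next task in proportion to how many batches each dataset contributes (the counts are hypothetical):

```python
import random

# Hypothetical per-task batch counts after load_data().
num_batches = {'cls': 120, 'senti_cls': 60}

random.seed(0)  # deterministic for the demo
# Draw the task to train on at each step, with probability proportional
# to its share of batches, so both datasets finish at a similar pace.
schedule = random.choices(
    list(num_batches), weights=list(num_batches.values()), k=10)
print(schedule)  # e.g. ['cls', 'senti_cls', 'cls', ...]
```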
0 In 1169, Ireland was invaded by which people?High King to ensure the terms of the Treaty of Windsor led Henry II, as King of England, to rule as effective monarch under the title of Lord of Ireland. This title was granted to his younger son but when Henry's heir unexpectedly died the title of King of England and Lord of Ireland became entwined in one
1 What year did a biracial Populist fusion gain the Governors office?to the legislature and governor's office, but the Populists attracted voters displeased with them. In 1896 a biracial, Populist-Republican Fusionist coalition gained the governor's office. The Democrats regained control of the legislature
1 nearest metro station to majnu ka tilla delhiRing Road of Delhi . It is at a walkable distance from ISBT Kashmere Gate . It is approachable through the Kashmeri Gate station of the Delhi Metro , lies on both the Red ( Dilshad Garden - Rithala ) and Yellow Lines ( Samaypur
3 where is california located in the region of the united states<P> California is a U.S. state in the Pacific Region of the United States . With 39.5 million residents , California is the most populous state in the United States and the third largest by area . The
1 when did the baptist church start in americacoworker for religious freedom , are variously credited as founding the earliest Baptist church in North America . In 1639 , Williams established a Baptist church in Providence , Rhode Island , and Clarke began a Baptist church in
0 where was the first capital of the united states locatedpassed to pave the way for a permanent capital . The decision to locate the capital was contentious , but Alexander Hamilton helped broker a compromise in which the federal government would take on war debt incurred during the American Revolutionary War , in exchange for support from northern states for locating the capital along the Potomac
0 What will new regulations will reduce?products off of the consumer market," said Michael Fry, director of conservation advocacy for the American Bird Conservancy. "By putting these restrictions in place, they are allowing a compromise to be made between themselves and organizations who have been working on this problem for a long time." The EPA's new measures, which were handed down Thursday, require that rat poisons be kept in bait stations above ground and in containers that meet agency standards. Loose bait, such as pellets, and the four most hazardous types of pesticides, known as "second-generation anticoagulants," will no longer be sold for personal use. Under the new restrictions, only farmers, livestock owners and certified rodent control employees will be allowed to purchase rat poison in bulk. Bags larger than 8 pounds will no longer be sold at hardware and home-improvement stores. Children who come into contact
0 who played lois lane in the man of steelmixture of toughness and vulnerability , but Peter Bradshaw thought that the character was `` sketchily conceived '' and criticized her lack of chemistry with Cavill . Even so , the film earned over $660 million to become one of her biggest box
0 What year did the writer of the 1968 novel "The Iron Man" become Poet Laurete?Giant is a 1999 American animated science-fiction comedy-drama action film using both traditional animation and computer animation, produced by and directed by Brad Bird in his directorial debut. It is based on the 1968 novel "The Iron Man" by Ted Hughes (which was published in the United States as "The Iron Giant") and was scripted by Tim McCanlies from a story treatment by Bird. The film stars the voices of Eli Marienthal,
2 The conquest of Nice was an effort by Suleiman and what French king?allies. A month prior to the siege of Nice, France supported the Ottomans with an artillery unit during the 1543 Ottoman conquest of Esztergom in northern Hungary. After further advances by the Turks, the Habsburg ruler Ferdinand officially recognized Ottoman ascendancy in Hungary in
0 when was the vaccine receivedfor swine flu, also known as 2009 H1N1, using reverse genetics, he said. "Suitable viruses will hopefully be sent to manufacturers by end of next week," Skinner wrote. Once that happens, vaccine makers will tweak the virus and have "pilot lots" of vaccine ready to be tested by mid- to late June. Several thousand cases have been reported
1 What is the nationality of the actor who costarred with Matt LeBlanc in "All the Queen's Men"?n approximate -99.92% return. [PAR] [TLE] Eddie Izzard [SEP] Edward John "Eddie" Izzard ( ; born 7 February 1962) is an English stand-up comedian, actor, writer and political activist. His comedic style takes the form of rambling, whimsical monologue, and self-referential pantomime. He
0 What sickened thousands of children?executives detained, a local official said, according to Xinhua, Initial tests showed more than 1,300 children in the Hunan province town of Wenping have excessive lead in their blood from the Wugang Manganese Smelting Plant. A second round of testing has been ordered to confirm the results. The plant opened in May 2008 without gaining the approval of the local environment protection bureau, said Huang Wenbin, a deputy environment chief in Wugang City, Xinhua reported. The plant was within 500 meters (about a quarter mile) of three schools. The
0 What percentage of the population are the Kpelle?are descendants of African American and West Indian, mostly Barbadian settlers, make up 2.5%. Congo people, descendants of repatriated Congo and Afro-Caribbean
1 Amount of people left homeless?86 dead, the state news agency said. About 30 people are missing, the official news agency Agencia Brasil said, citing civil defense officials. Earlier reports had indicated as many as 100 people were dead. In addition, more than 54,000 residents have been left homeless, and another 1.5 million have been affected by the heavy rains, the state news agency reported. Brazilian President Luiz Inacio Lula da Silva announced he will release nearly 700 million reais ($350 million)
2 What other countries were in disagreement with the United Nations decision on Burma ?that strongly called upon the government of Myanmar to end its systematic violations of human rights. In January 2007, Russia and China vetoed a
0 Besides Barcelona and Real Madrid, what other team has remained in the Primera Division?first football club to win six out of six competitions in a single year, completing the sextuple in also winning the Spanish Super Cup, UEFA Super Cup and FIFA Club World Cup. In 2011, the club became
0 William Frederick Truax, is a former professional American football tight end in the National Football League (NFL) from 1964 to 1973 for the Los Angeles Rams and the Dallas Cowboys, following the 1970 NFL season, Truax was traded by the Rams to the Cowboys for wide receiver Lance Rentzel, a former American football flanker, in which organization?in New Orleans and college football at Louisiana State University and was drafted in the second round of the 1964 NFL draft. Following the 1970 NFL season, Truax was traded by the Rams to the Cowboys for wide receiver Lance Rentzel. He was part of the Cowboys' Super Bowl VI championship team in 1971. He played
3 What year did Chopin learn that the uprising in Warsaw was crushed?enlist. Chopin, now alone in Vienna, was nostalgic for his homeland, and wrote to a friend, "I curse the moment of my departure." When in September 1831 he learned, while travelling from Vienna to Paris, that the uprising had been crushed, he expressed his anguish in the pages of his private journal: "Oh
1 where do they make money in washington dc; all coinage is produced by the United States Mint . With production facilities in Washington , DC , and Fort Worth , Texas , the Bureau of Engraving and Printing is the largest producer of government security documents in the United States . </P>
0 What did a researcher compare this process to?which makes it one of the highest rates of maternal mortality in the Americas. In wealthy developed nations, only nine women die for every 100,000 births. The five main causes of pregnancy-related deaths in Peru are hemorrhage, pre-eclampsia, infection, complications following abortion and obstructed birth, according to Peru's Ministry of Health figures. Amnesty's Peru researcher Nuria Garcia said, in a written statement: "The rates of maternal mortality in Peru are scandalous. The fact that so many women are dying from preventable causes is a human rights violation. "The Peruvian state is simply ignoring
0 How many containers can Longtan Containers Port Area handle?Port of Nanjing is the largest inland port in China, with annual cargo tonnage reached 191,970,000 t in 2012. The port area is 98 kilometres (61 mi) in length and has 64 berths
0 The 2011 New York City Marathon was sponsored by which Dutch multinational banking corporation?are retail banking, direct banking, commercial banking, investment banking, asset management, and insurance services. ING is an abbreviation for "Internationale Nederlanden Groep " (English: International Netherlands Group). [PAR] [TLE] 2011 New York City Marathon [SEP] The 42nd New York City Marathon took
0 What is human flourishing?it does not involve believing that human nature is purely good or that all people can live up to the Humanist ideals without help. If anything, there is recognition that living up to one's potential is hard
0 What was the result of Dida appealto play in next month's Champions League match at Shakhtar Donetsk after partially winning his appeal to UEFA against a two-match ban. Dida has had one game of his two-match ban suspended for a year following an appeal to UEFA. Brazilian Dida was also fined 60,000 Swiss francs by European football's ruling body following an incident involving a supporter during the Champions clash against Celtic in Scotland on October 3. The 34-year-old Brazilian was initially banned for two games for his theatrics following a Celtic fan's encroachment onto the pitch during the 2-1 defeat at Celtic
1 What is more plentiful in capital projects?generates economic distortion in the public sector by diverting public investment into capital projects where bribes and kickbacks are more plentiful. Officials may increase the technical complexity of public sector projects to conceal or
0 where were band greeted with cheers?the United States for a show in Stamford, Connecticut, on Tuesday, after they have "a few days off to recuperate," Robinson said. The trio was the opening act for Nelson until they were loudly booed in Toronto, a day after the actor-musician's bizarre interview with a CBC radio host. Ironically, the comments that offended Canadians included Thornton's assessment that they were "very reserved" and "it doesn't matter what you say to them." "It's mashed potatoes with no gravy," Thornton told CBC host Jian Ghomeshi. "We tend to play places where people throw things at each other and here they just sort of sit there," he said. Watch Thornton's interview » The audience at Thursday night's show in Toronto loudly booed the Boxmasters, with some shouts of "Here comes the gravy!" The Toronto Star newspaper reported. Thornton's remarks about
0 What do Mexicans call Mexico City?the Federal District in Spanish: D.F., which is read "De-Efe"). They are formally called capitalinos (in reference to the city being the capital of the country), but "[p]erhaps because capitalino is the
0 where does lock stock and barrel come fromindividual components one at a time . One craftsman made the `` lock '' which would have been a `` match lock '' , `` wheel lock '' , `` flint lock '' etc .
1 who has the power to establish a prison system<P> The Federal Bureau of Prisons ( BOP ) is a United States federal law enforcement agency . A subdivision of
0 what are south americas only 2 landlocked countriessuch countries , including five partially recognised states .
# coding=utf-8
import paddlepalm as palm
import json
if __name__ == '__main__':
    # hyperparameters and paths
    max_seqlen = 256
    batch_size = 8
    num_epochs = 10
    lr = 5e-5
    vocab_path = './pretrain/ernie-ch-uncased-base/vocab.txt'
    train_file = './data/chnsenticorp/train.tsv'
    predict_file = './data/chnsenticorp/test.tsv'
    random_seed = 1

    config = json.load(open('./pretrain/ernie-ch-uncased-base/ernie_config.json'))
    # build the ERNIE backbone from its config file (a backbone can also be
    # constructed directly, e.g. palm.backbone.ERNIE(...))
    ernie = palm.backbone.ERNIE.from_config(config)

    # Create the readers for this classification task. The arguments control the
    # dataset format, the number of files, preprocessing rules and so on.
    cls_reader = palm.reader.ClassifyReader(vocab_path, max_seqlen, seed=random_seed)
    predict_cls_reader = palm.reader.ClassifyReader(vocab_path, max_seqlen, seed=random_seed, phase='predict')

    # Different backbones require different input features from the reader. For a
    # classification task the basic input features are token_ids and label_ids,
    # but BERT/ERNIE additionally require position, segment, input_mask and other
    # features to be extracted from the input. After registration, the reader
    # automatically supplements the fields required by the backbone.
    cls_reader.register_with(ernie)

    print("preparing data...")
    print(cls_reader.num_examples)  # 0 until load_data is called
    cls_reader.load_data(train_file, batch_size, num_epochs=num_epochs)
    print(cls_reader.num_examples)
    print('done!')

    input_dim = config['hidden_size']
    num_classes = 2
    dropout_prob = 0.1
    # Create the task head (e.g. classification, matching, machine reading
    # comprehension). Each head has required/optional arguments of its own. Note
    # that heads are decoupled from readers: a head is valid as long as the
    # dataset-side fields it depends on can be provided by the reader.
    cls_head = palm.head.Classify(num_classes, input_dim, dropout_prob)

    # Create a trainer from the reader and the task head. A trainer represents a
    # training task; it maintains the training progress and the key information
    # of the task, performs validity checks, and controls saving/loading of the
    # task's model.
    trainer = palm.Trainer('senti_cls')
    loss_var = trainer.build_forward(ernie, cls_head)

    # total number of optimization steps, e.g. ~12000 for ChnSentiCorp's ~9.6k
    # training examples with 10 epochs and batch size 8
    n_steps = cls_reader.num_examples * num_epochs // batch_size
    warmup_steps = int(0.1 * n_steps)
    # linear warmup over warmup_steps, then linear decay to zero by n_steps;
    # pass sched=None to Adam for a constant learning rate instead
    sched = palm.lr_sched.TriangularSchedualer(warmup_steps, n_steps)
    adam = palm.optimizer.Adam(loss_var, lr, sched)
    trainer.build_backward(optimizer=adam, weight_decay=0.01)

    trainer.random_init_params()
    trainer.load_pretrain('pretrain/ernie-ch-uncased-base/params')
    trainer.fit_reader(cls_reader)
    trainer.train(print_steps=1, save_steps=n_steps-24, save_path='outputs', save_type='ckpt')
    # restore the checkpoint saved at the end of training and switch to
    # prediction mode
    model_path = './outputs/ckpt.step'+str(n_steps-24)
    print('prepare to predict...')
    pred_ernie = palm.backbone.ERNIE.from_config(config, phase='predict')
    predict_cls_reader.register_with(pred_ernie)
    predict_cls_reader.load_data(predict_file, 8)
    cls_pred_head = palm.head.Classify(num_classes, input_dim, phase='predict')
    trainer.build_predict_forward(pred_ernie, cls_pred_head)
    pred_ckpt = trainer.load_ckpt(model_path, phase='predict')
    trainer.fit_reader(predict_cls_reader, phase='predict')
    print(predict_cls_reader.num_examples)
    print('predicting..')
    trainer.predict(print_steps=20, output_dir="outputs/test/")