diff --git a/PaddleNLP/PALM b/PaddleNLP/PALM
deleted file mode 160000
index 5426f75073cf5bd416622dbe71b146d3dc8fffb6..0000000000000000000000000000000000000000
--- a/PaddleNLP/PALM
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 5426f75073cf5bd416622dbe71b146d3dc8fffb6
diff --git a/PaddleNLP/PaddleLARK/ERNIE b/PaddleNLP/PaddleLARK/ERNIE
deleted file mode 160000
index 30b892e3c029bff706337f269e6c158b0a223f60..0000000000000000000000000000000000000000
--- a/PaddleNLP/PaddleLARK/ERNIE
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 30b892e3c029bff706337f269e6c158b0a223f60
diff --git a/PaddleNLP/README.md b/PaddleNLP/README.md
index aa26b2ecfc27444bdb63f588250a2fb93e2081cf..ecd21b29a796aff8568b4505849e307956e8faf4 100644
--- a/PaddleNLP/README.md
+++ b/PaddleNLP/README.md
@@ -10,7 +10,7 @@
 
 - **Rich and comprehensive NLP task support:**
 
-  - PaddleNLP provides multi-granularity, multi-scenario application support, covering basic NLP techniques such as [word segmentation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), [part-of-speech tagging](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), and [named entity recognition](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), as well as core NLP techniques such as [text classification](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification), [text similarity](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net), [semantic representation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK), and [text generation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleTextGEN). PaddleNLP also provides the dedicated core techniques, tool components, models, and pretrained parameters for common large-scale NLP application systems (such as [reading comprehension](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMRC), [dialogue systems](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue), and [machine translation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMT)), so you can move through the NLP field unimpeded.
+  - PaddleNLP provides multi-granularity, multi-scenario application support, covering basic NLP techniques such as [word segmentation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), [part-of-speech tagging](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), and [named entity recognition](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), as well as core NLP techniques such as [text classification](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification), [text similarity](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net), [semantic representation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/pretrain_langauge_models), and [text generation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/seq2seq). PaddleNLP also provides the dedicated core techniques, tool components, models, and pretrained parameters for common large-scale NLP application systems (such as [reading comprehension](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/machine_reading_comprehension), [dialogue systems](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/dialogue_system), and [machine translation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/machine_translation)), so you can move through the NLP field unimpeded.
 
 - **Stable, reliable NLP models and powerful pretrained parameters:**
 
@@ -55,11 +55,11 @@ cd models/PaddleNLP/sentiment_classification
 |                    **Language model**                    | [Language_model](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/language_model) | A classic neural language model based on recurrent neural networks (RNN). |
 |                 **Sentiment classification**:fire:                 | [Senta](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification), [EmotionDetection](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/emotion_detection) | Senta (Sentiment Classification) and EmotionDetection provide sentiment analysis models for *general scenarios* and for *human-machine dialogue scenarios*, respectively. |
 |              **Text similarity**:fire:              | [SimNet](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net) | SimNet (Similarity Net) provides efficient and reliable text similarity tools and pretrained models. |
-|                 **Semantic representation**:fire:                 | [PaddleLARK](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK) | PaddleLARK (Paddle LAnguage Representation toolKit) integrates popular Chinese and English pretrained models such as ELMo, BERT, ERNIE 1.0, ERNIE 2.0, and XLNet. |
-|                    **Text generation**                    | [PaddleTextGEN](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleTextGEN) | Paddle Text Generation provides a series of classic text generation models, such as vanilla seq2seq, seq2seq with attention, and variational seq2seq. |
-|                    **Reading comprehension**                    | [PaddleMRC](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMRC) | PaddleMRC (Paddle Machine Reading Comprehension) collects Baidu's models, tools, and open-source data for reading comprehension, including DuReader (Baidu's open-source large-scale Chinese reading comprehension dataset built from real search user behavior), KT-Net (a knowledge-enhanced reading comprehension model that once ranked first on SQuAD and ReCoRD), and D-Net (a pretrain-finetune framework that took first place in the MRQA evaluation at EMNLP 2019). |
-|                    **Dialogue system**                    | [PaddleDialogue](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue) | Includes: 1) DGU (Dialogue General Understanding), a general dialogue understanding model covering common dialogue tasks such as context-response matching in **retrieval-based chat systems** and **intent recognition**, **slot filling**, and **dialogue state tracking** in **task-oriented dialogue systems**, achieving the best results on 6 public international datasets.
-2) knowledge-driven dialogue: Baidu's open-source knowledge-grounded open-domain dialogue dataset, published at ACL 2019.
-3) ADEM (Auto Dialogue Evaluation Model): an automatic dialogue evaluation model that scores the response quality of different dialogue generation models. |
-|                    **Machine translation**                    | [PaddleMT](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMT) | Paddle Machine Translation: classic Transformer-based machine translation models. |
+|                 **Semantic representation**:fire:                 | [pretrain_langauge_models](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/pretrain_langauge_models) | Integrates popular Chinese and English pretrained models such as ELMo, BERT, ERNIE 1.0, ERNIE 2.0, and XLNet. |
+|                    **Text generation**                    | [seq2seq](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/seq2seq) | seq2seq provides a series of classic text generation models, such as vanilla seq2seq, seq2seq with attention, and variational seq2seq. |
+|                    **Reading comprehension**                    | [machine_reading_comprehension](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/machine_reading_comprehension) | Paddle Machine Reading Comprehension collects Baidu's models, tools, and open-source data for reading comprehension, including DuReader (Baidu's open-source large-scale Chinese reading comprehension dataset built from real search user behavior), KT-Net (a knowledge-enhanced reading comprehension model that once ranked first on SQuAD and ReCoRD), and D-Net (a pretrain-finetune framework that took first place in the MRQA evaluation at EMNLP 2019). |
+|                    **Dialogue system**                    | [dialogue_system](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/dialogue_system) | Includes: 1) DGU (Dialogue General Understanding), a general dialogue understanding model covering common dialogue tasks such as context-response matching in **retrieval-based chat systems** and **intent recognition**, **slot filling**, and **dialogue state tracking** in **task-oriented dialogue systems**, achieving the best results on 6 public international datasets.
+2) knowledge-driven dialogue: Baidu's open-source knowledge-grounded open-domain dialogue dataset, published at ACL 2019.
+3) ADEM (Auto Dialogue Evaluation Model): an automatic dialogue evaluation model that scores the response quality of different dialogue generation models. |
+|                    **Machine translation**                    | [machine_translation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/machine_translation) | Paddle Machine Translation: classic Transformer-based machine translation models. |
 |                  **Other frontier work**                  | [Research](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research) |                    Open-source releases of Baidu's latest frontier research.                    |
 
 
@@ -70,13 +70,13 @@ cd models/PaddleNLP/sentiment_classification
 ```text
 .
 ├── Research                          # Collection of Baidu NLP research work
-├── PaddleMT                          # Machine translation code, data, and pretrained models
-├── PaddleDialogue                    # Dialogue system code, data, and pretrained models
-├── PaddleMRC                         # Reading comprehension code, data, and pretrained models
-├── PaddleLARK                        # Language representation toolkit
+├── machine_translation               # Machine translation code, data, and pretrained models
+├── dialogue_system                   # Dialogue system code, data, and pretrained models
+├── machine_reading_comprehension     # Reading comprehension code, data, and pretrained models
+├── pretrain_langauge_models          # Language representation toolkit
 ├── language_model                    # Language model
 ├── lexical_analysis                  # LAC lexical analysis
-├── models                            # Shared networks
+├── shared_modules/models             # Shared networks
 │   ├── __init__.py
 │   ├── classification
 │   ├── dialogue_model_toolkit
@@ -87,7 +87,7 @@ cd models/PaddleNLP/sentiment_classification
 │   ├── representation
 │   ├── sequence_labeling
 │   └── transformer_encoder.py
-├── preprocess                        # Shared text preprocessing tools
+├── shared_modules/preprocess         # Shared text preprocessing tools
 │   ├── __init__.py
 │   ├── ernie
 │   ├── padding.py
diff --git a/PaddleNLP/dialogue_domain_classification/run_classifier.py b/PaddleNLP/dialogue_domain_classification/run_classifier.py
index 0fc9107413088ae444505fade3c779a38c9331bb..af22ffbf091b157b29b9386d00220322026c0032 100755
--- a/PaddleNLP/dialogue_domain_classification/run_classifier.py
+++ b/PaddleNLP/dialogue_domain_classification/run_classifier.py
@@ -16,7 +16,6 @@
 # limitations under the License.
 """
 
-
 from __future__ import absolute_import
 from __future__ import division
 from __future__ import print_function
@@ -40,43 +39,55 @@ import math
 np.random.seed(0)
 random.seed(0)
 
-
 parser = argparse.ArgumentParser(__doc__)
 DEV_COUNT = 1
 model_g = ArgumentGroup(parser, "model", "model configuration and paths.")
-model_g.add_arg("init_checkpoint", str, None, "Init checkpoint to resume training from.")
-model_g.add_arg("checkpoints", str, "./checkpoints", "Path to save checkpoints.")
+model_g.add_arg("init_checkpoint", str, None,
+                "Init checkpoint to resume training from.")
+model_g.add_arg("checkpoints", str, "./checkpoints",
+                "Path to save checkpoints.")
 model_g.add_arg("config_path", str, "./data/input/model.conf", "Model conf.")
 model_g.add_arg("build_dict", bool, False, "Build dict.")
 
 train_g = ArgumentGroup(parser, "training", "training options.")
 train_g.add_arg("cpu_num", int, 3, "Number of Threads.")
 train_g.add_arg("epoch", int, 100, "Number of epoches for training.")
-train_g.add_arg("learning_rate", float, 0.1, "Learning rate used to train with warmup.")
-train_g.add_arg("save_steps", int, 1000, "The steps interval to save checkpoints.")
-train_g.add_arg("validation_steps", int, 100, "The steps interval to evaluate model performance.")
+train_g.add_arg("learning_rate", float, 0.1,
+                "Learning rate used to train with warmup.")
+train_g.add_arg("save_steps", int, 1000,
+                "The steps interval to save checkpoints.")
+train_g.add_arg("validation_steps", int, 100,
+                "The steps interval to evaluate model performance.")
 train_g.add_arg("random_seed", int, 7, "random seed")
-train_g.add_arg("threshold", float, 0.1, "When the confidence exceeds the threshold, the corresponding label is given.")
+train_g.add_arg(
+    "threshold", float, 0.1,
+    "When the confidence exceeds the threshold, the corresponding label is given."
+)
 
 log_g = ArgumentGroup(parser, "logging", "logging related.")
 log_g.add_arg("skip_steps", int, 10, "The steps interval to print loss.")
 
-data_g = ArgumentGroup(parser, "data", "Data paths, vocab paths and data processing options")
+data_g = ArgumentGroup(parser, "data",
+                       "Data paths, vocab paths and data processing options")
 data_g.add_arg("data_dir", str, "./data/input/", "Path to training data.")
 data_g.add_arg("save_dir", str, "./data/output/", "Path to save.")
-data_g.add_arg("max_seq_len", int, 50, "Tokens' number of the longest seqence allowed.")
-data_g.add_arg("batch_size", int, 64, "The total number of examples in one batch for training.")
+data_g.add_arg("max_seq_len", int, 50,
+               "Tokens' number of the longest seqence allowed.")
+data_g.add_arg("batch_size", int, 64,
+               "The total number of examples in one batch for training.")
 
 run_type_g = ArgumentGroup(parser, "run_type", "running type options.")
 run_type_g.add_arg("use_cuda", bool, False, "If set, use GPU for training.")
 # run_type_g.add_arg("use_fast_executor", bool, False, "If set, use fast parallel executor (in experiment).")
-run_type_g.add_arg("do_train", bool, True, "Whether to perform evaluation on test data set.")
-run_type_g.add_arg("do_eval", bool, True, "Whether to perform evaluation on test data set.")
-run_type_g.add_arg("do_test", bool, True, "Whether to perform evaluation on test data set.")
+run_type_g.add_arg("do_train", bool, True,
+                   "Whether to perform evaluation on test data set.")
+run_type_g.add_arg("do_eval", bool, True,
+                   "Whether to perform evaluation on test data set.")
+run_type_g.add_arg("do_test", bool, True,
+                   "Whether to perform evaluation on test data set.")
 args = parser.parse_args()
 
 
-
 def get_score(pred_result, label, eval_phase):
     """[get precision recall and f-score]
     
@@ -93,7 +104,7 @@ def get_score(pred_result, label, eval_phase):
         total += 1
         pred_labels = []
         actual_labels = []
-        for j in range(1, len(pred_result[0])): # the 0 one is background
+        for j in range(1, len(pred_result[0])):  # the 0 one is background
             if pred_result[i][j] == 1:
                 pred_labels.append(j)
             if label[i][j] == 1:
@@ -106,12 +117,12 @@ def get_score(pred_result, label, eval_phase):
                 tp += 1
                 true_cnt += 1
         elif len(pred_labels) == 0 and len(actual_labels) == 0:
-            true_cnt += 1   
+            true_cnt += 1
     try:
         precision = tp * 1.0 / pred_pos_num
         recall = tp * 1.0 / pos_num
         f1 = 2 * precision * recall / (recall + precision)
-    except Exception as  e:
+    except Exception as e:
         precision = 0
         recall = 0
         f1 = 0
@@ -139,7 +150,7 @@ def train(args, train_exe, build_res, place):
     pred_label = build_res["pred_label"]
     label = build_res["label"]
     fetch_list = [cost.name, prediction.name, pred_label.name, label.name]
-    train_pyreader = build_res["train_pyreader"]
+    train_data_loader = build_res["train_data_loader"]
     train_prog = build_res["train_prog"]
     steps = 0
     time_begin = time.time()
@@ -147,22 +158,24 @@ def train(args, train_exe, build_res, place):
     logger.info("Begin training")
     for i in range(args.epoch):
         try:
-            for data in train_pyreader(): 
+            for data in train_data_loader():
                 avg_cost_np, avg_pred_np, pred_label, label = train_exe.run(feed=data, program=compiled_prog, \
                                                                             fetch_list=fetch_list)
                 steps += 1
                 if steps % int(args.skip_steps) == 0:
                     time_end = time.time()
                     used_time = time_end - time_begin
-                    get_score(pred_label, label, eval_phase = "Train")
+                    get_score(pred_label, label, eval_phase="Train")
                     logger.info('loss is {}'.format(avg_cost_np))
-                    logger.info("epoch: %d, step: %d, speed: %f steps/s" % (i, steps, args.skip_steps / used_time))
+                    logger.info("epoch: %d, step: %d, speed: %f steps/s" %
+                                (i, steps, args.skip_steps / used_time))
                     time_begin = time.time()
                 if steps % args.save_steps == 0:
                     save_path = os.path.join(args.checkpoints,
-                        "step_" + str(steps))
-                    fluid.io.save_persistables(train_exe, save_path, train_prog)
-                    logger.info("[save]step %d : save at %s" % (steps, save_path))
+                                             "step_" + str(steps))
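+                    # fluid.io.save stores all parameters and optimizer state of train_prog under the save_path prefix; it supersedes the deprecated fluid.io.save_persistables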
+                    fluid.io.save(train_prog, save_path)
+                    logger.info("[save]step %d : save at %s" %
+                                (steps, save_path))
                 if steps % args.validation_steps == 0:
                     if args.do_eval:
                         evaluate(args, test_exe, build_res, "eval")
@@ -173,11 +186,16 @@ def train(args, train_exe, build_res, place):
             logger.error("Train error : %s" % str(e))
             exit(1)
     save_path = os.path.join(args.checkpoints, "step_" + str(steps))
-    fluid.io.save_persistables(train_exe, save_path, train_prog)
+    fluid.io.save(train_prog, save_path)
     logger.info("[save]step %d : save at %s" % (steps, save_path))
 
 
-def evaluate(args, test_exe, build_res, eval_phase, save_result=False, id2intent=None):
+def evaluate(args,
+             test_exe,
+             build_res,
+             eval_phase,
+             save_result=False,
+             id2intent=None):
     """[evaluate on dev/test dataset]
     
     Arguments:
@@ -193,7 +211,7 @@ def evaluate(args, test_exe, build_res, eval_phase, save_result=False, id2intent
         save_result {bool} -- [description] (default: {False})
         id2intent {[type]} -- [description] (default: {None})
     """
-    place = build_res["test_place"] 
+    place = build_res["test_place"]
     threshold = args.threshold
     cost = build_res["cost"]
     prediction = build_res["prediction"]
@@ -203,29 +221,34 @@ def evaluate(args, test_exe, build_res, eval_phase, save_result=False, id2intent
     total_cost, total_acc, pred_prob_list, pred_label_list, label_list = [], [], [], [], []
     if eval_phase == "eval":
         test_prog = build_res["eval_compiled_prog"]
-        test_pyreader = build_res["eval_pyreader"]
+        test_data_loader = build_res["eval_data_loader"]
     elif eval_phase == "test":
         test_prog = build_res["test_compiled_prog"]
-        test_pyreader = build_res["test_pyreader"]
+        test_data_loader = build_res["test_data_loader"]
     else:
         exit(1)
     logger.info("-----------------------------------------------------------")
-    for data in test_pyreader():
+    for data in test_data_loader():
         avg_cost_np, avg_pred_np, pred_label, label= test_exe.run(program=test_prog, fetch_list=fetch_list, feed=data, \
             return_numpy=True)
         total_cost.append(avg_cost_np)
         pred_prob_list.extend(avg_pred_np)
         pred_label_list.extend(pred_label)
         label_list.extend(label)
-           
+
     if save_result:
-        logger.info("save result at : %s" % args.save_dir + "/" + eval_phase + ".rst")
+        logger.info("save result at : %s" % args.save_dir + "/" + eval_phase +
+                    ".rst")
         save_dir = args.save_dir
         if not os.path.exists(save_dir):
             logger.warning("save dir not exists, and create it")
             os.makedirs(save_dir)
-        fin = codecs.open(os.path.join(args.data_dir, eval_phase + ".txt"), "r", encoding="utf8")
-        fout = codecs.open(args.save_dir + "/" + eval_phase + ".rst", "w", encoding="utf8")
+        fin = codecs.open(
+            os.path.join(args.data_dir, eval_phase + ".txt"),
+            "r",
+            encoding="utf8")
+        fout = codecs.open(
+            args.save_dir + "/" + eval_phase + ".rst", "w", encoding="utf8")
         for line in pred_prob_list:
             query = fin.readline().rsplit("\t", 1)[0]
             res = []
@@ -236,18 +259,23 @@ def evaluate(args, test_exe, build_res, eval_phase, save_result=False, id2intent
             if len(res) == 0:
                 res.append(id2intent[0])
             fout.write("%s\t%s\n" % (query, "\2".join(sorted(res))))
-        fout.close() 
+        fout.close()
         fin.close()
-    
+
     logger.info("[%s] result: " % eval_phase)
     get_score(pred_label_list, label_list, eval_phase)
     logger.info('loss is {}'.format(sum(total_cost) * 1.0 / len(total_cost)))
     logger.info("-----------------------------------------------------------")
 
 
-
-def create_net(args, flow_data, class_dim, dict_dim, place, model_name="textcnn_net", is_infer=False):
-    """[create network and pyreader]
+def create_net(args,
+               flow_data,
+               class_dim,
+               dict_dim,
+               place,
+               model_name="textcnn_net",
+               is_infer=False):
+    """[create network and loader]
     
     Arguments:
         flow_data {[type]} -- [description]
@@ -266,29 +294,42 @@ def create_net(args, flow_data, class_dim, dict_dim, place, model_name="textcnn_
         model = textcnn_net_multi_label
     else:
         return
-    char_list = fluid.data(name="char", shape=[None, args.max_seq_len, 1], dtype="int64", lod_level=0)
-    label = fluid.data(name="label", shape=[None, class_dim], dtype="float32", lod_level=0)  # label data
-    reader = fluid.io.PyReader(feed_list=[char_list, label], capacity=args.batch_size * 10, iterable=True, \
-                                return_list=False)
-    output = model(char_list, label, dict_dim,
-                emb_dim=flow_data["model"]["emb_dim"],
-                hid_dim=flow_data["model"]["hid_dim"],
-                hid_dim2=flow_data["model"]["hid_dim2"],
-                class_dim=class_dim,
-                win_sizes=flow_data["model"]["win_sizes"],
-                is_infer=is_infer,
-                threshold=args.threshold,
-                max_seq_len=args.max_seq_len)
+    char_list = fluid.data(
+        name="char",
+        shape=[None, args.max_seq_len, 1],
+        dtype="int64",
+        lod_level=0)
+    label = fluid.data(
+        name="label", shape=[None, class_dim], dtype="float32",
+        lod_level=0)  # label data
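+    # DataLoader.from_generator supersedes the deprecated fluid.io.PyReader; capacity is the prefetch queue size, counted in batches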
+    data_loader = fluid.io.DataLoader.from_generator(
+        feed_list=[char_list, label],
+        capacity=args.batch_size * 10,
+        iterable=True,
+        return_list=False)
+    output = model(
+        char_list,
+        label,
+        dict_dim,
+        emb_dim=flow_data["model"]["emb_dim"],
+        hid_dim=flow_data["model"]["hid_dim"],
+        hid_dim2=flow_data["model"]["hid_dim2"],
+        class_dim=class_dim,
+        win_sizes=flow_data["model"]["win_sizes"],
+        is_infer=is_infer,
+        threshold=args.threshold,
+        max_seq_len=args.max_seq_len)
     if is_infer:
         prediction = output
-        return [reader, prediction]
+        return [data_loader, prediction]
     else:
-        avg_cost, prediction, pred_label, label = output[0], output[1], output[2], output[3]
-        return [reader, avg_cost, prediction, pred_label, label]
-        
+        avg_cost, prediction, pred_label, label = output[0], output[1], output[
+            2], output[3]
+        return [data_loader, avg_cost, prediction, pred_label, label]
 
-def build_data_reader(args, char_dict, intent_dict):
-    """[decorate samples for pyreader]
+
+def build_data_loader(args, char_dict, intent_dict):
+    """[decorate samples for dataloader]
     
     Arguments:
         args {[type]} -- [description]
@@ -298,20 +339,22 @@ def build_data_reader(args, char_dict, intent_dict):
     Returns:
         [type] -- [description]
     """
-    reader_res = {}
+    loader_res = {}
     if args.do_train:
         train_processor = DataReader(char_dict, intent_dict, args.max_seq_len)
         train_data_generator = train_processor.prepare_data(
             data_path=args.data_dir + "train.txt",
             batch_size=args.batch_size,
             mode='train')
-        reader_res["train_data_generator"] = train_data_generator
+        loader_res["train_data_generator"] = train_data_generator
         num_train_examples = train_processor._get_num_examples()
         logger.info("Num train examples: %d" % num_train_examples)
         logger.info("Num train steps: %d" % (math.ceil(num_train_examples * 1.0 / args.batch_size) * \
                                             args.epoch // DEV_COUNT))
-        if math.ceil(num_train_examples * 1.0 / args.batch_size) // DEV_COUNT <= 0:
-            logger.error("Num of train steps is less than 0  or equals to 0, exit")
+        if math.ceil(num_train_examples * 1.0 /
+                     args.batch_size) // DEV_COUNT <= 0:
+            logger.error(
+                "Number of train steps is less than or equal to 0, exit")
             exit(1)
     if args.do_eval:
         eval_processor = DataReader(char_dict, intent_dict, args.max_seq_len)
@@ -319,7 +362,7 @@ def build_data_reader(args, char_dict, intent_dict):
             data_path=args.data_dir + "eval.txt",
             batch_size=args.batch_size,
             mode='eval')
-        reader_res["eval_data_generator"] = eval_data_generator
+        loader_res["eval_data_generator"] = eval_data_generator
         num_eval_examples = eval_processor._get_num_examples()
         logger.info("Num eval examples: %d" % num_eval_examples)
     if args.do_test:
@@ -328,11 +371,12 @@ def build_data_reader(args, char_dict, intent_dict):
             data_path=args.data_dir + "test.txt",
             batch_size=args.batch_size,
             mode='test')
-        reader_res["test_data_generator"] = test_data_generator
-    return reader_res
+        loader_res["test_data_generator"] = test_data_generator
+    return loader_res
 
 
-def build_graph(args, model_config, num_labels, dict_dim, place, test_place, reader_res):
+def build_graph(args, model_config, num_labels, dict_dim, place, test_place,
+                loader_res):
     """[build paddle graph]
     
     Arguments:
@@ -341,7 +385,7 @@ def build_graph(args, model_config, num_labels, dict_dim, place, test_place, rea
         num_labels {[type]} -- [description]
         dict_dim {[type]} -- [description]
         place {[type]} -- [description]
-        reader_res {[type]} -- [description]
+        loader_res {[type]} -- [description]
     
     Returns:
         [type] -- [description]
@@ -349,7 +393,7 @@ def build_graph(args, model_config, num_labels, dict_dim, place, test_place, rea
     res = {}
     cost, prediction, pred_label, label = None, None, None, None
     train_prog = fluid.default_main_program()
-    
+
     startup_prog = fluid.default_startup_program()
     eval_prog = train_prog.clone(for_test=True)
     test_prog = train_prog.clone(for_test=True)
@@ -358,36 +402,42 @@ def build_graph(args, model_config, num_labels, dict_dim, place, test_place, rea
     if args.do_train:
         with fluid.program_guard(train_prog, startup_prog):
             with fluid.unique_name.guard():
-                train_pyreader, cost, prediction, pred_label, label = create_net(args, model_config, num_labels, \
+                train_data_loader, cost, prediction, pred_label, label = create_net(args, model_config, num_labels, \
                                                             dict_dim, place, model_name="textcnn_net")
-                train_pyreader.decorate_sample_list_generator(reader_res['train_data_generator'], places=place)
-                res["train_pyreader"] = train_pyreader
-                sgd_optimizer = fluid.optimizer.SGD(learning_rate=fluid.layers.exponential_decay(
-                                learning_rate=args.learning_rate, decay_steps=1000, decay_rate=0.5, staircase=True))
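+                # set_sample_list_generator is the DataLoader counterpart of PyReader's decorate_sample_list_generator: it feeds batches produced by the generator to the given places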
+                train_data_loader.set_sample_list_generator(
+                    loader_res['train_data_generator'], places=place)
+                res["train_data_loader"] = train_data_loader
+                sgd_optimizer = fluid.optimizer.SGD(
+                    learning_rate=fluid.layers.exponential_decay(
+                        learning_rate=args.learning_rate,
+                        decay_steps=1000,
+                        decay_rate=0.5,
+                        staircase=True))
                 sgd_optimizer.minimize(cost)
     if args.do_eval:
         with fluid.program_guard(eval_prog, startup_prog):
             with fluid.unique_name.guard():
-                eval_pyreader, cost, prediction, pred_label, label = create_net(args, model_config, num_labels, \
+                eval_data_loader, cost, prediction, pred_label, label = create_net(args, model_config, num_labels, \
                                                              dict_dim, test_place, model_name="textcnn_net")
-                eval_pyreader.decorate_sample_list_generator(reader_res['eval_data_generator'], places=test_place)
-                res["eval_pyreader"] = eval_pyreader
+                eval_data_loader.set_sample_list_generator(
+                    loader_res['eval_data_generator'], places=test_place)
+                res["eval_data_loader"] = eval_data_loader
     if args.do_test:
         with fluid.program_guard(test_prog, startup_prog):
             with fluid.unique_name.guard():
-                test_pyreader, cost, prediction, pred_label, label = create_net(args, model_config, num_labels, \
+                test_data_loader, cost, prediction, pred_label, label = create_net(args, model_config, num_labels, \
                                                             dict_dim, test_place, model_name="textcnn_net")
-                test_pyreader.decorate_sample_list_generator(reader_res['test_data_generator'], places=test_place)
-                res["test_pyreader"] = test_pyreader
+                test_data_loader.set_sample_list_generator(
+                    loader_res['test_data_generator'], places=test_place)
+                res["test_data_loader"] = test_data_loader
     res["cost"] = cost
     res["prediction"] = prediction
     res["label"] = label
     res["pred_label"] = pred_label
-    res["train_prog"] =train_prog 
+    res["train_prog"] = train_prog
     res["eval_prog"] = eval_prog
     res["test_prog"] = test_prog
 
-  
     return res
 
 
@@ -421,22 +471,25 @@ def main(args):
         id2intent[int(value)] = key
     num_labels = len(intent_dict)
     # build model
-    reader_res = build_data_reader(args, char_dict, intent_dict)
-    build_res = build_graph(args, model_config, num_labels, dict_dim, place, test_place, reader_res)
+    loader_res = build_data_loader(args, char_dict, intent_dict)
+    build_res = build_graph(args, model_config, num_labels, dict_dim, place,
+                            test_place, loader_res)
     build_res["place"] = place
     build_res["test_place"] = test_place
     if not (args.do_train or args.do_eval or args.do_test):
         raise ValueError("For args `do_train`, `do_eval` and `do_test`, at "
                          "least one of them must be True.")
-        
+
     exe.run(startup_prog)
     if args.init_checkpoint and args.init_checkpoint != "None":
         try:
-            init_checkpoint(exe, args.init_checkpoint, main_program=startup_prog)
+            init_checkpoint(
+                exe, args.init_checkpoint, main_program=startup_prog)
             logger.info("Load model from %s" % args.init_checkpoint)
         except Exception as e:
             logger.exception(str(e))
-            logger.error("Faild load model from %s [%s]" % (args.init_checkpoint, str(e)))
+            logger.error("Failed to load model from %s [%s]" %
+                         (args.init_checkpoint, str(e)))
     build_strategy = fluid.compiler.BuildStrategy()
     build_strategy.fuse_all_reduce_ops = False
     exec_strategy = fluid.ExecutionStrategy()
@@ -449,22 +502,23 @@ def main(args):
                                                                     exec_strategy=exec_strategy)
         build_res["compiled_prog"] = compiled_prog
     if args.do_test:
-        test_compiled_prog = fluid.compiler.CompiledProgram(build_res["test_prog"])
+        test_compiled_prog = fluid.compiler.CompiledProgram(build_res[
+            "test_prog"])
         build_res["test_compiled_prog"] = test_compiled_prog
     if args.do_eval:
-        eval_compiled_prog = fluid.compiler.CompiledProgram(build_res["eval_prog"])
+        eval_compiled_prog = fluid.compiler.CompiledProgram(build_res[
+            "eval_prog"])
         build_res["eval_compiled_prog"] = eval_compiled_prog
 
     if args.do_train:
-       train(args, exe, build_res, place)
+        train(args, exe, build_res, place)
     if args.do_eval:
-       evaluate(args, exe, build_res, "eval", \
-                save_result=True, id2intent=id2intent)
-    if args.do_test:
-       evaluate(args, exe, build_res, "test",\
+        evaluate(args, exe, build_res, "eval", \
                  save_result=True, id2intent=id2intent)
+    if args.do_test:
+        evaluate(args, exe, build_res, "test",\
+                  save_result=True, id2intent=id2intent)
 
-        
 
 if __name__ == "__main__":
     logger.info("the paddle version is %s" % paddle.__version__)
diff --git a/PaddleNLP/dialogue_domain_classification/utils.py b/PaddleNLP/dialogue_domain_classification/utils.py
index 2c839a2ccc605fae3c602f241586fda2838fea15..ac32225abb9435f87f6c382b0a9a42461e1cbc36 100755
--- a/PaddleNLP/dialogue_domain_classification/utils.py
+++ b/PaddleNLP/dialogue_domain_classification/utils.py
@@ -32,14 +32,13 @@ try:
 except ImportError:
     import ConfigParser as cp
 
-
 random_seed = 7
 logger = logging.getLogger()
 format = "%(asctime)s - %(name)s - %(levelname)s -%(filename)s-%(lineno)4d -%(message)s"
 # format = "%(levelname)8s: %(asctime)s: %(filename)s:%(lineno)4d %(message)s"
 logging.basicConfig(format=format)
 logger.setLevel(logging.INFO)
-logger = logging.getLogger('Paddle-DDC') 
+logger = logging.getLogger('Paddle-DDC')
 
 
 def str2bool(v):
@@ -77,6 +76,7 @@ class ArgumentGroup(object):
     Arguments:
         object {[type]} -- [description]
     """
+
     def __init__(self, parser, title, des):
         self._group = parser.add_argument_group(title=title, description=des)
 
@@ -107,6 +107,7 @@ class DataReader(object):
     Returns:
         [type] -- [description]
     """
+
     def __init__(self, char_vocab, intent_dict, max_len):
         self._char_vocab = char_vocab
         self._intent_dict = intent_dict
@@ -115,10 +116,10 @@ class DataReader(object):
         self.all_data = []
         self.max_len = max_len
         self.padding_id = 0
-    
+
     def _get_num_examples(self):
         return len(self.all_data)
-    
+
     def prepare_data(self, data_path, batch_size, mode):
         """
         prepare data
@@ -128,12 +129,17 @@ class DataReader(object):
 #     word_dict_path), "The given word dictionary does not exist."
         assert os.path.exists(data_path), "The given data file does not exist."
         if mode == "train":
-            train_reader = fluid.io.batch(paddle.reader.shuffle(self.data_reader(data_path, self.max_len, shuffle=True),
-                                        buf_size=batch_size * 100), batch_size)
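+            # paddle.reader.shuffle buffers batch_size * 100 samples and shuffles within the buffer before fluid.io.batch groups them into minibatches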
+            train_reader = fluid.io.batch(
+                paddle.reader.shuffle(
+                    self.data_reader(
+                        data_path, self.max_len, shuffle=True),
+                    buf_size=batch_size * 100),
+                batch_size)
             # train_reader = fluid.io.batch(self.data_reader(data_path), batch_size)                   
             return train_reader
         else:
-            test_reader = fluid.io.batch(self.data_reader(data_path, self.max_len), batch_size)
+            test_reader = fluid.io.batch(
+                self.data_reader(data_path, self.max_len), batch_size)
             return test_reader
 
     def data_reader(self, file_path, max_len, shuffle=False):
@@ -141,7 +147,7 @@ class DataReader(object):
         Convert query into id list
         use fixed voc
         """
-        
+
         for line in codecs.open(file_path, "r", encoding="utf8"):
             line = line.strip()
             if isinstance(line, six.binary_type):
@@ -150,7 +156,8 @@ class DataReader(object):
             char_id_list = list(map(lambda x: 0 if x not in self._char_vocab else int(self._char_vocab[x]), \
                             list(query)))
             if len(char_id_list) < max_len:
-                char_id_list.extend([self.padding_id] * (max_len - len(char_id_list)))
+                char_id_list.extend([self.padding_id] *
+                                    (max_len - len(char_id_list)))
             char_id_list = char_id_list[:max_len]
             intent_id_list = [self.padding_id] * self.intent_size
             for item in intent.split('\2'):
@@ -159,6 +166,7 @@ class DataReader(object):
         if shuffle:
             random.seed(random_seed)
             random.shuffle(self.all_data)
+
         def reader():
             """
             reader
@@ -166,6 +174,7 @@ class DataReader(object):
             for char_id_list, intent_id_list in self.all_data:
                 # print char_id_list, intent_id
                 yield char_id_list, intent_id_list
+
         return reader
 
 
@@ -178,6 +187,7 @@ class DataProcesser(object):
     Returns:
         [type] -- [description]
     """
+
     @staticmethod
     def read_dict(filename):
         """
@@ -211,7 +221,7 @@ class DataProcesser(object):
         char_dict = {}
         intent_dict = {}
         # readfile
-        for line in codecs.open(filename): 
+        for line in codecs.open(filename):
             line = line.strip()
             if isinstance(line, six.binary_type):
                 line = line.strip().decode("utf8", errors="ignore")
@@ -227,7 +237,8 @@ class DataProcesser(object):
                     intent_dict[intent] = 0
                 intent_dict[intent] += 1
         #   save char dict
-        with codecs.open("%s/char.dict" % save_dir, "w", encoding="utf8") as f_out:
+        with codecs.open(
+                "%s/char.dict" % save_dir, "w", encoding="utf8") as f_out:
             f_out.write("PAD\0020\n")
             f_out.write("OOV\0021\n")
             char_id = 2
@@ -238,7 +249,8 @@ class DataProcesser(object):
                     f_out.write("%s\002%d\n" % (key, char_id))
                     char_id += 1
         #   save intent dict
-        with codecs.open("%s/domain.dict" % save_dir, "w", encoding="utf8") as f_out:
+        with codecs.open(
+                "%s/domain.dict" % save_dir, "w", encoding="utf8") as f_out:
             f_out.write("SYS_OTHER\0020\n")
             intent_id = 1
             for key, value in intent_dict.items():
@@ -247,7 +259,6 @@ class DataProcesser(object):
                         key = key.encode("utf8")
                     f_out.write("%s\002%d\n" % (key, intent_id))
                     intent_id += 1
-        
 
 
 class ConfigReader(object):
@@ -282,49 +293,13 @@ class ConfigReader(object):
         return flow_data
 
 
-def init_pretraining_params(exe,
-                            pretraining_params_path,
-                            main_program,
-                            use_fp16=False):
-    """load params of pretrained model, NOT including moment, learning_rate"""
-    assert os.path.exists(pretraining_params_path
-                          ), "[%s] cann't be found." % pretraining_params_path
-
-    def _existed_params(var):
-        if not isinstance(var, fluid.framework.Parameter):
-            return False
-        return os.path.exists(os.path.join(pretraining_params_path, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        pretraining_params_path,
-        main_program=main_program,
-        predicate=_existed_params)
-    print("Load pretraining parameters from {}.".format(
-        pretraining_params_path))
-
-
 def init_checkpoint(exe, init_checkpoint_path, main_program):
     """
     Init CheckPoint
     """
-    assert os.path.exists(
-        init_checkpoint_path), "[%s] cann't be found." % init_checkpoint_path
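+    # fluid.load restores all parameters and optimizer variables of main_program from the checkpoint in one call, replacing the manual load_vars + predicate walk used previously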
+    fluid.load(main_program, init_checkpoint_path, exe)
+    print("Load model from {}".format(init_checkpoint_path))
 
-    def existed_persitables(var):
-        """
-        If existed presitabels
-        """
-        if not fluid.io.is_persistable(var):
-            return False
-        return os.path.exists(os.path.join(init_checkpoint_path, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        init_checkpoint_path,
-        main_program=main_program,
-        predicate=existed_persitables)
-    print ("Load model from {}".format(init_checkpoint_path))
 
 def print_arguments(args):
     """
@@ -350,5 +325,3 @@ def check_version(version='1.6.0'):
     except Exception as e:
         logger.error(err)
         sys.exit(1)
-
-
diff --git a/PaddleNLP/PaddleDialogue/README.md b/PaddleNLP/dialogue_system/README.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/README.md
rename to PaddleNLP/dialogue_system/README.md
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/.run_ce.sh b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/.run_ce.sh
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/.run_ce.sh
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/README.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/README.md
similarity index 99%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/README.md
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/README.md
index b668351b7cf69b803f4b12c2bead54843b86c9ed..94059e992966d66f222ef5c44201bcc7957621a6 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/README.md
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/README.md
@@ -468,7 +468,7 @@ python -u main.py \
       --loss_type="CLS"
 ```
 #### On Windows:
-Evaluation: 
+Evaluation:
 ```
 python -u main.py --do_eval=true --use_cuda=false --evaluation_file=data\input\data\unlabel_data\test.ids --output_prediction_file=data\output\pretrain_matching_predict --loss_type=CLS
 ```
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/_ce.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/_ce.py
similarity index 87%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/_ce.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/_ce.py
index e29b5aa9e18c06cb7fdab33e59c5f2b9ed32db41..121a2c836e716808dce89ed3e648a2ec05fc41af 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/_ce.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/_ce.py
@@ -21,14 +21,16 @@ from kpi import DurationKpi
 
 train_loss_card1 = CostKpi('train_loss_card1', 0.03, 0, actived=True)
 train_loss_card4 = CostKpi('train_loss_card4', 0.03, 0, actived=True)
-train_duration_card1 = DurationKpi('train_duration_card1', 0.01, 0, actived=True)
-train_duration_card4 = DurationKpi('train_duration_card4', 0.01, 0, actived=True)
+train_duration_card1 = DurationKpi(
+    'train_duration_card1', 0.01, 0, actived=True)
+train_duration_card4 = DurationKpi(
+    'train_duration_card4', 0.01, 0, actived=True)
 
 tracking_kpis = [
-        train_loss_card1,
-        train_loss_card4,
-        train_duration_card1,
-        train_duration_card4,
+    train_loss_card1,
+    train_loss_card4,
+    train_duration_card1,
+    train_duration_card4,
 ]
 
 
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/__init__.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/__init__.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/__init__.py
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/evaluate.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/evaluate.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/evaluate.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/evaluate.py
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/prepare_data_and_model.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/prepare_data_and_model.py
similarity index 66%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/prepare_data_and_model.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/prepare_data_and_model.py
index bc1dce21fd2b08225ac44e88b1ccf18c7e9b85ee..539a1a1febf351870a70e73ceb598ed81b720066 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/prepare_data_and_model.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/prepare_data_and_model.py
@@ -20,48 +20,52 @@ import sys
 import io
 import os
 
-URLLIB=urllib
-if sys.version_info >= (3, 0): 
+URLLIB = urllib
+if sys.version_info >= (3, 0):
     import urllib.request
-    URLLIB=urllib.request
+    URLLIB = urllib.request
 
-DATA_MODEL_PATH = {"DATA_PATH": "https://baidu-nlp.bj.bcebos.com/auto_dialogue_evaluation_dataset-1.0.0.tar.gz", 
-                   "TRAINED_MODEL": "https://baidu-nlp.bj.bcebos.com/auto_dialogue_evaluation_models.2.0.0.tar.gz"} 
+DATA_MODEL_PATH = {
+    "DATA_PATH":
+    "https://baidu-nlp.bj.bcebos.com/auto_dialogue_evaluation_dataset-1.0.0.tar.gz",
+    "TRAINED_MODEL":
+    "https://baidu-nlp.bj.bcebos.com/auto_dialogue_evaluation_models.2.0.0.tar.gz"
+}
 
-PATH_MAP = {'DATA_PATH': "./data/input", 
-            'TRAINED_MODEL': './data/saved_models'}
+PATH_MAP = {'DATA_PATH': "./data/input", 'TRAINED_MODEL': './data/saved_models'}
 
 
-def un_tar(tar_name, dir_name): 
-    try: 
+def un_tar(tar_name, dir_name):
+    try:
         t = tarfile.open(tar_name)
-        t.extractall(path = dir_name)
+        t.extractall(path=dir_name)
         return True
     except Exception as e:
         print(e)
         return False
 
 
-def download_model_and_data(): 
+def download_model_and_data():
     print("Downloading ade data, pretrain model and trained models......")
     print("This process is quite long, please wait patiently............")
-    for path in ['./data/input/data', './data/saved_models/trained_models']: 
-        if not os.path.exists(path): 
+    for path in ['./data/input/data', './data/saved_models/trained_models']:
+        if not os.path.exists(path):
             continue
         shutil.rmtree(path)
-    for path_key in DATA_MODEL_PATH: 
+    for path_key in DATA_MODEL_PATH:
         filename = os.path.basename(DATA_MODEL_PATH[path_key])
-        URLLIB.urlretrieve(DATA_MODEL_PATH[path_key], os.path.join("./", filename))
+        URLLIB.urlretrieve(DATA_MODEL_PATH[path_key],
+                           os.path.join("./", filename))
         state = un_tar(filename, PATH_MAP[path_key])
-        if not state: 
+        if not state:
             print("Tar %s error....." % path_key)
             return False
         os.remove(filename)
     return True
 
 
-if __name__ == "__main__": 
+if __name__ == "__main__":
     state = download_model_and_data()
-    if not state: 
+    if not state:
         exit(1)
     print("Downloading data and models sucess......")
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/reader.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/reader.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/reader.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/reader.py
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/__init__.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/__init__.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/__init__.py
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/configure.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/configure.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/configure.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/configure.py
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/input_field.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/input_field.py
similarity index 93%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/input_field.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/input_field.py
index 6af31643d2587d0135b878f6f1f187b496955290..e90c7ff0376f4b9936c970fcafacd0ff01a303f4 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/input_field.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/input_field.py
@@ -25,8 +25,8 @@ import numpy as np
 import paddle.fluid as fluid
 
 
-class InputField(object): 
-    def __init__(self, input_field): 
+class InputField(object):
+    def __init__(self, input_field):
         """init inpit field"""
         self.context_wordseq = input_field[0]
         self.response_wordseq = input_field[1]
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/model_check.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/model_check.py
similarity index 99%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/model_check.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/model_check.py
index 013130cbb0a9f4d44edc28589bf83672b69abf26..dacf1a668238a22d94fbefb9f52b05620581cad2 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/model_check.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/model_check.py
@@ -30,7 +30,7 @@ def check_cuda(use_cuda, err = \
 
 
 if __name__ == "__main__":
-    
+
     check_cuda(True)
 
     check_cuda(False)
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/save_load_io.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/save_load_io.py
similarity index 98%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/save_load_io.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/save_load_io.py
index a992aec97881ed9834f614b50da30dddcc677858..bdcdd811dd1ebb27542e5facc3ba050f00df08f8 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/save_load_io.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/save_load_io.py
@@ -69,8 +69,8 @@ def init_from_checkpoint(args, exe, program):
 def init_from_params(args, exe, program):
 
     assert isinstance(args.init_from_params, str)
-    
-    if not os.path.exists(args.init_from_params): 
+
+    if not os.path.exists(args.init_from_params):
         raise Warning("the params path does not exist.")
         return False
 
@@ -122,5 +122,3 @@ def save_param(args, exe, program, dirname):
     print("save parameters at %s" % (os.path.join(param_dir, dirname)))
 
     return True
-
-
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade_net.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade_net.py
similarity index 76%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade_net.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade_net.py
index 8907b798cf6e4457954c89997e7ad9a84fcbe61e..10db91859a3774b3bf24266d18ebf244850a3b24 100755
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade_net.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade_net.py
@@ -21,14 +21,13 @@ import paddle
 import paddle.fluid as fluid
 
 
-def create_net(
-    is_training,
-    model_input,
-    args, 
-    clip_value=10.0,
-    word_emb_name="shared_word_emb",
-    lstm_W_name="shared_lstm_W",
-    lstm_bias_name="shared_lstm_bias"): 
+def create_net(is_training,
+               model_input,
+               args,
+               clip_value=10.0,
+               word_emb_name="shared_word_emb",
+               lstm_W_name="shared_lstm_W",
+               lstm_bias_name="shared_lstm_bias"):
 
     context_wordseq = model_input.context_wordseq
     response_wordseq = model_input.response_wordseq
@@ -52,17 +51,15 @@ def create_net(
             initializer=fluid.initializer.Normal(scale=0.1)))
 
     #fc to fit dynamic LSTM
-    context_fc = fluid.layers.fc(
-        input=context_emb,
-        size=args.hidden_size * 4,
-        param_attr=fluid.ParamAttr(name='fc_weight'),
-        bias_attr=fluid.ParamAttr(name='fc_bias'))
+    context_fc = fluid.layers.fc(input=context_emb,
+                                 size=args.hidden_size * 4,
+                                 param_attr=fluid.ParamAttr(name='fc_weight'),
+                                 bias_attr=fluid.ParamAttr(name='fc_bias'))
 
-    response_fc = fluid.layers.fc(
-        input=response_emb,
-        size=args.hidden_size * 4,
-        param_attr=fluid.ParamAttr(name='fc_weight'),
-        bias_attr=fluid.ParamAttr(name='fc_bias'))
+    response_fc = fluid.layers.fc(input=response_emb,
+                                  size=args.hidden_size * 4,
+                                  param_attr=fluid.ParamAttr(name='fc_weight'),
+                                  bias_attr=fluid.ParamAttr(name='fc_bias'))
 
     #LSTM
     context_rep, _ = fluid.layers.dynamic_lstm(
@@ -82,7 +79,7 @@ def create_net(
     logits = fluid.layers.bilinear_tensor_product(
         context_rep, response_rep, size=1)
 
-    if args.loss_type == 'CLS': 
+    if args.loss_type == 'CLS':
         label = fluid.layers.cast(x=label, dtype='float32')
         loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
         loss = fluid.layers.reduce_mean(
@@ -95,10 +92,10 @@ def create_net(
         loss = fluid.layers.reduce_mean(loss)
     else:
         raise ValueError
-    
-    if is_training: 
+
+    if is_training:
         return loss
-    else: 
+    else:
         return logits
 
 
@@ -106,7 +103,5 @@ def set_word_embedding(word_emb, place, word_emb_name="shared_word_emb"):
     """
     Set word embedding
     """
-    word_emb_param = fluid.global_scope().find_var(
-        word_emb_name).get_tensor()
+    word_emb_param = fluid.global_scope().find_var(word_emb_name).get_tensor()
     word_emb_param.set(word_emb, place)
-
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/config/ade.yaml b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/config/ade.yaml
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/config/ade.yaml
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/config/ade.yaml
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/inference_models/inference_models.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/inference_models/inference_models.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/inference_models/inference_models.md
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/inference_models/inference_models.md
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/input/input.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/input/input.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/input/input.md
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/input/input.md
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/output/output.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/output/output.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/output/output.md
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/output/output.md
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/pretrain_model/pretrain_model.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/pretrain_model/pretrain_model.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/pretrain_model/pretrain_model.md
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/pretrain_model/pretrain_model.md
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/saved_models/saved_models.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/saved_models/saved_models.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/saved_models/saved_models.md
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/saved_models/saved_models.md
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/eval.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/eval.py
similarity index 86%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/eval.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/eval.py
index edac4669ffccd1d8e250c631c949ab620af14b7d..26f06f1c5150653e880056a820861e5cfb224ed3 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/eval.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/eval.py
@@ -23,13 +23,13 @@ import ade.evaluate as evaluate
 from ade.utils.configure import PDConfig
 
 
-def do_eval(args): 
+def do_eval(args):
     """evaluate metrics"""
     labels = []
     fr = io.open(args.evaluation_file, 'r', encoding="utf8")
-    for line in fr: 
+    for line in fr:
         tokens = line.strip().split('\t')
-        assert len(tokens) == 3 
+        assert len(tokens) == 3
         label = int(tokens[2])
         labels.append(label)
 
@@ -43,25 +43,25 @@ def do_eval(args):
         score = score.astype(np.float64)
         scores.append(score)
 
-    if args.loss_type == 'CLS': 
+    if args.loss_type == 'CLS':
         recall_dict = evaluate.evaluate_Recall(list(zip(scores, labels)))
         mean_score = sum(scores) / len(scores)
         print('mean score: %.6f' % mean_score)
         print('evaluation recall result:')
         print('1_in_2: %.6f\t1_in_10: %.6f\t2_in_10: %.6f\t5_in_10: %.6f' %
-             (recall_dict['1_in_2'], recall_dict['1_in_10'],
-             recall_dict['2_in_10'], recall_dict['5_in_10']))
-    elif args.loss_type == 'L2': 
+              (recall_dict['1_in_2'], recall_dict['1_in_10'],
+               recall_dict['2_in_10'], recall_dict['5_in_10']))
+    elif args.loss_type == 'L2':
         scores = [x[0] for x in scores]
         mean_score = sum(scores) / len(scores)
         cor = evaluate.evaluate_cor(scores, labels)
         print('mean score: %.6f\nevaluation cor results:%.6f' %
-            (mean_score, cor))
+              (mean_score, cor))
     else:
         raise ValueError
-    
 
-if __name__ == "__main__": 
+
+if __name__ == "__main__":
     args = PDConfig(yaml_file="./data/config/ade.yaml")
     args.build()
 
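The `1_in_2` / `1_in_10` recall figures printed by `do_eval` are standard response-selection metrics: among a group of candidate responses, does the positive one rank in the top k? As a point of reference, here is a minimal numpy sketch of such a metric. It assumes flat score/label lists laid out in consecutive candidate groups with exactly one positive per group; `recall_at_k` is an illustrative helper, not the actual `ade.evaluate` implementation.

```
import numpy as np

def recall_at_k(scores, labels, group_size=10, k=1):
    """Fraction of candidate groups whose positive lands in the top-k."""
    assert len(scores) == len(labels) and len(scores) % group_size == 0
    hits, groups = 0, len(scores) // group_size
    for g in range(groups):
        s = np.asarray(scores[g * group_size:(g + 1) * group_size])
        l = np.asarray(labels[g * group_size:(g + 1) * group_size])
        top_k = np.argsort(-s)[:k]       # indices of the k best-scored candidates
        hits += int(l[top_k].sum() > 0)  # did the positive make the cut?
    return hits / float(groups)

scores = [0.9, 0.1, 0.2, 0.3, 0.1, 0.05, 0.4, 0.2, 0.1, 0.3]
labels = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(recall_at_k(scores, labels, k=1))  # 1.0: the positive ranks first
```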
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/inference_model.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/inference_model.py
similarity index 70%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/inference_model.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/inference_model.py
index ca0872d00f1a9e35efe9fdcef8ee25a15bf14889..5ef7bfc1a9378061d591308881253d06b4c1b3a0 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/inference_model.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/inference_model.py
@@ -42,22 +42,24 @@ def do_save_inference_model(args):
         with fluid.unique_name.guard():
 
             context_wordseq = fluid.data(
-                    name='context_wordseq', shape=[-1, 1], dtype='int64', lod_level=1)
+                name='context_wordseq',
+                shape=[-1, 1],
+                dtype='int64',
+                lod_level=1)
             response_wordseq = fluid.data(
-                    name='response_wordseq', shape=[-1, 1], dtype='int64', lod_level=1)
-            labels = fluid.data(
-                    name='labels', shape=[-1, 1], dtype='int64')
+                name='response_wordseq',
+                shape=[-1, 1],
+                dtype='int64',
+                lod_level=1)
+            labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64')
 
             input_inst = [context_wordseq, response_wordseq, labels]
             input_field = InputField(input_inst)
-            data_reader = fluid.io.PyReader(feed_list=input_inst, 
-                        capacity=4, iterable=False)
+            data_reader = fluid.io.PyReader(
+                feed_list=input_inst, capacity=4, iterable=False)
 
             logits = create_net(
-                    is_training=False,
-                    model_input=input_field, 
-                    args=args
-                )
+                is_training=False, model_input=input_field, args=args)
 
     if args.use_cuda:
         place = fluid.CUDAPlace(0)
@@ -68,7 +70,7 @@ def do_save_inference_model(args):
     exe.run(startup_prog)
 
     assert (args.init_from_params) or (args.init_from_pretrain_model)
-    
+
     if args.init_from_params:
         save_load_io.init_from_params(args, exe, test_prog)
     elif args.init_from_pretrain_model:
@@ -76,24 +78,22 @@ def do_save_inference_model(args):
 
     # saving inference model
     fluid.io.save_inference_model(
-            args.inference_model_dir,
-            feeded_var_names=[
-                input_field.context_wordseq.name, 
-                input_field.response_wordseq.name,
-            ],
-            target_vars=[
-                logits,
-            ],
-            executor=exe,
-            main_program=test_prog,
-            model_filename="model.pdmodel",
-            params_filename="params.pdparams")
+        args.inference_model_dir,
+        feeded_var_names=[
+            input_field.context_wordseq.name,
+            input_field.response_wordseq.name,
+        ],
+        target_vars=[logits, ],
+        executor=exe,
+        main_program=test_prog,
+        model_filename="model.pdmodel",
+        params_filename="params.pdparams")
 
     print("save inference model at %s" % (args.inference_model_dir))
 
 
 if __name__ == "__main__":
-    args = PDConfig(yaml_file="./data/config/ade.yaml")   
+    args = PDConfig(yaml_file="./data/config/ade.yaml")
     args.build()
 
     check_cuda(args.use_cuda)
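For completeness, the model exported by `do_save_inference_model` can be loaded back with the matching fluid 1.x API. A hedged sketch, assuming the export directory was `./inference_models` (the real value comes from `args.inference_model_dir` in `ade.yaml`) and using made-up word ids; both inputs are level-1 LoD tensors of shape `[-1, 1]`, as declared above.

```
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

# Load the program, feed names and fetch targets saved above.
program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname="./inference_models",        # assumed args.inference_model_dir
    executor=exe,
    model_filename="model.pdmodel",
    params_filename="params.pdparams")

# One context of 3 tokens and one response of 2 tokens (toy ids).
context = fluid.create_lod_tensor(
    np.array([[3], [7], [11]], dtype="int64"), [[3]], place)
response = fluid.create_lod_tensor(
    np.array([[5], [9]], dtype="int64"), [[2]], place)

logits, = exe.run(program,
                  feed={feed_names[0]: context, feed_names[1]: response},
                  fetch_list=fetch_targets)
print(logits)
```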
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/main.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/main.py
similarity index 99%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/main.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/main.py
index d474aa190e6b7559e2ef69da507829eda2bf9206..f26969aa400589d41e6932277091b9b5374111ac 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/main.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/main.py
@@ -26,7 +26,6 @@ from inference_model import do_save_inference_model
 
 from ade.utils.configure import PDConfig
 
-
 if __name__ == "__main__":
 
     args = PDConfig(yaml_file="./data/config/ade.yaml")
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/predict.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/predict.py
similarity index 78%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/predict.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/predict.py
index 279dff8844a262fbec13df81513642f999d8592b..6f5a081f93cb1c81048b65e555378308968813aa 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/predict.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/predict.py
@@ -32,7 +32,7 @@ from ade.utils.model_check import check_cuda
 import ade.utils.save_load_io as save_load_io
 
 
-def do_predict(args): 
+def do_predict(args):
     """
     predict function
     """
@@ -46,30 +46,32 @@ def do_predict(args):
         with fluid.unique_name.guard():
 
             context_wordseq = fluid.data(
-                    name='context_wordseq', shape=[-1, 1], dtype='int64', lod_level=1)
+                name='context_wordseq',
+                shape=[-1, 1],
+                dtype='int64',
+                lod_level=1)
             response_wordseq = fluid.data(
-                    name='response_wordseq', shape=[-1, 1], dtype='int64', lod_level=1)
-            labels = fluid.data(
-                    name='labels', shape=[-1, 1], dtype='int64')
+                name='response_wordseq',
+                shape=[-1, 1],
+                dtype='int64',
+                lod_level=1)
+            labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64')
 
             input_inst = [context_wordseq, response_wordseq, labels]
             input_field = InputField(input_inst)
-            data_reader = fluid.io.PyReader(feed_list=input_inst, 
-                        capacity=4, iterable=False)
+            data_reader = fluid.io.PyReader(
+                feed_list=input_inst, capacity=4, iterable=False)
 
             logits = create_net(
-                    is_training=False,
-                    model_input=input_field, 
-                    args=args
-                )
+                is_training=False, model_input=input_field, args=args)
             logits.persistable = True
 
             fetch_list = [logits.name]
     #for_test=True changes the is_test attribute of operators to True
     test_prog = test_prog.clone(for_test=True)
-    if args.use_cuda: 
+    if args.use_cuda:
         place = fluid.CUDAPlace(int(os.getenv('FLAGS_selected_gpus', '0')))
-    else: 
+    else:
         place = fluid.CPUPlace()
 
     exe = fluid.Executor(place)
@@ -85,42 +87,39 @@ def do_predict(args):
 
     processor = reader.DataProcessor(
         data_path=args.predict_file,
-        max_seq_length=args.max_seq_len, 
+        max_seq_length=args.max_seq_len,
         batch_size=args.batch_size)
 
     batch_generator = processor.data_generator(
-        place=place,
-        phase="test",
-        shuffle=False, 
-        sample_pro=1)
+        place=place, phase="test", shuffle=False, sample_pro=1)
     num_test_examples = processor.get_num_examples(phase='test')
 
     data_reader.decorate_batch_generator(batch_generator)
     data_reader.start()
 
     scores = []
-    while True: 
-        try: 
+    while True:
+        try:
             results = exe.run(compiled_test_prog, fetch_list=fetch_list)
             scores.extend(results[0])
         except fluid.core.EOFException:
             data_reader.reset()
             break
 
-    scores = scores[: num_test_examples]
+    scores = scores[:num_test_examples]
     print("Write the predicted results into the output_prediction_file")
     fw = io.open(args.output_prediction_file, 'w', encoding="utf8")
-    for index, score in enumerate(scores): 
+    for index, score in enumerate(scores):
         fw.write("%s\t%s\n" % (index, score))
     print("finish........................................")
 
 
-if __name__ == "__main__": 
-    
+if __name__ == "__main__":
+
     args = PDConfig(yaml_file="./data/config/ade.yaml")
     args.build()
     args.Print()
 
     check_cuda(args.use_cuda)
 
-    do_predict(args) 
+    do_predict(args)
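The start / run-until-`EOFException` / reset loop in `do_predict` (and again in `do_train` below) is the standard way to drive a non-iterable `fluid.io.PyReader`. A stripped-down, self-contained sketch of the same pattern on a toy program:

```
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name="x", shape=[-1, 1], dtype="float32")
data_reader = fluid.io.PyReader(feed_list=[x], capacity=4, iterable=False)
y = x * 2.0

def batch_generator():
    for i in range(3):                   # three toy batches
        yield [np.full((2, 1), i, dtype="float32")]

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

data_reader.decorate_batch_generator(batch_generator)
data_reader.start()                      # begin feeding
while True:
    try:
        out, = exe.run(fetch_list=[y.name])
        print(out.ravel())
    except fluid.core.EOFException:      # generator exhausted
        data_reader.reset()              # required before the next pass
        break
```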
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/run.sh b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/run.sh
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/run.sh
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/run.sh
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/train.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/train.py
similarity index 71%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/train.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/train.py
index f9a8b28153899bf7e5aaa34c19276fa0158043ce..c1939866d5a3905b0e3ffb976ee042b5c0a65cd9 100755
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/train.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/train.py
@@ -31,7 +31,7 @@ from ade.utils.input_field import InputField
 from ade.utils.model_check import check_cuda
 import ade.utils.save_load_io as save_load_io
 
-try: 
+try:
     import cPickle as pickle  #python 2
 except ImportError as e:
     import pickle  #python 3
@@ -47,24 +47,26 @@ def do_train(args):
         train_prog.random_seed = args.random_seed
         startup_prog.random_seed = args.random_seed
 
-        with fluid.unique_name.guard(): 
+        with fluid.unique_name.guard():
             context_wordseq = fluid.data(
-                    name='context_wordseq', shape=[-1, 1], dtype='int64', lod_level=1)
+                name='context_wordseq',
+                shape=[-1, 1],
+                dtype='int64',
+                lod_level=1)
             response_wordseq = fluid.data(
-                    name='response_wordseq', shape=[-1, 1], dtype='int64', lod_level=1)
-            labels = fluid.data(
-                    name='labels', shape=[-1, 1], dtype='int64')
+                name='response_wordseq',
+                shape=[-1, 1],
+                dtype='int64',
+                lod_level=1)
+            labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64')
 
             input_inst = [context_wordseq, response_wordseq, labels]
             input_field = InputField(input_inst)
-            data_reader = fluid.io.PyReader(feed_list=input_inst, 
-                        capacity=4, iterable=False)
+            data_reader = fluid.io.PyReader(
+                feed_list=input_inst, capacity=4, iterable=False)
 
             loss = create_net(
-                    is_training=True,
-                    model_input=input_field, 
-                    args=args
-                )
+                is_training=True, model_input=input_field, args=args)
             loss.persistable = True
             # gradient clipping
             fluid.clip.set_gradient_clip(clip=fluid.clip.GradientClipByValue(
@@ -74,20 +76,21 @@ def do_train(args):
 
             if args.use_cuda:
                 dev_count = fluid.core.get_cuda_device_count()
-                place = fluid.CUDAPlace(int(os.getenv('FLAGS_selected_gpus', '0')))
-            else: 
+                place = fluid.CUDAPlace(
+                    int(os.getenv('FLAGS_selected_gpus', '0')))
+            else:
                 dev_count = int(os.environ.get('CPU_NUM', 1))
                 place = fluid.CPUPlace()
 
             processor = reader.DataProcessor(
                 data_path=args.training_file,
-                max_seq_length=args.max_seq_len, 
+                max_seq_length=args.max_seq_len,
                 batch_size=args.batch_size)
 
             batch_generator = processor.data_generator(
                 place=place,
                 phase="train",
-                shuffle=True, 
+                shuffle=True,
                 sample_pro=args.sample_pro)
 
             num_train_examples = processor.get_num_examples(phase='train')
@@ -105,18 +108,23 @@ def do_train(args):
         args.init_from_pretrain_model == "")
 
     #init from some checkpoint, to resume the previous training
-    if args.init_from_checkpoint: 
+    if args.init_from_checkpoint:
         save_load_io.init_from_checkpoint(args, exe, train_prog)
     #init from some pretrain models, to better solve the current task
-    if args.init_from_pretrain_model: 
+    if args.init_from_pretrain_model:
         save_load_io.init_from_pretrain_model(args, exe, train_prog)
 
     if args.word_emb_init:
         print("start loading word embedding init ...")
         if six.PY2:
-            word_emb = np.array(pickle.load(io.open(args.word_emb_init, 'rb'))).astype('float32')
+            word_emb = np.array(
+                pickle.load(io.open(args.word_emb_init, 'rb'))).astype(
+                    'float32')
         else:
-            word_emb = np.array(pickle.load(io.open(args.word_emb_init, 'rb'), encoding="bytes")).astype('float32')
+            word_emb = np.array(
+                pickle.load(
+                    io.open(args.word_emb_init, 'rb'),
+                    encoding="bytes")).astype('float32')
         set_word_embedding(word_emb, place)
         print("finish init word embedding  ...")
 
@@ -124,69 +132,74 @@ def do_train(args):
     build_strategy.enable_inplace = True
 
     compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel(
-                loss_name=loss.name, build_strategy=build_strategy)
+        loss_name=loss.name, build_strategy=build_strategy)
 
     steps = 0
     begin_time = time.time()
-    time_begin =  time.time()
+    time_begin = time.time()
 
-    for epoch_step in range(args.epoch): 
+    for epoch_step in range(args.epoch):
         data_reader.start()
         sum_loss = 0.0
         ce_loss = 0.0
         while True:
-            try: 
+            try:
                 fetch_list = [loss.name]
                 outputs = exe.run(compiled_train_prog, fetch_list=fetch_list)
                 np_loss = outputs
                 sum_loss += np.array(np_loss).mean()
                 ce_loss = np.array(np_loss).mean()
 
-                if steps % args.print_steps == 0: 
+                if steps % args.print_steps == 0:
                     time_end = time.time()
                     used_time = time_end - time_begin
                     current_time = time.strftime('%Y-%m-%d %H:%M:%S',
-                                                time.localtime(time.time()))
-                    print('%s epoch: %d, step: %s, avg loss %s, speed: %f steps/s' % (current_time, epoch_step, steps, sum_loss / args.print_steps, args.print_steps / used_time))
+                                                 time.localtime(time.time()))
+                    print(
+                        '%s epoch: %d, step: %s, avg loss %s, speed: %f steps/s'
+                        % (current_time, epoch_step, steps, sum_loss /
+                           args.print_steps, args.print_steps / used_time))
                     sum_loss = 0.0
                     time_begin = time.time()
 
-                if steps % args.save_steps == 0: 
+                if steps % args.save_steps == 0:
                     if args.save_checkpoint:
-                        save_load_io.save_checkpoint(args, exe, train_prog, "step_" + str(steps))
-                    if args.save_param: 
-                        save_load_io.save_param(args, exe, train_prog, "step_" + str(steps))
+                        save_load_io.save_checkpoint(args, exe, train_prog,
+                                                     "step_" + str(steps))
+                    if args.save_param:
+                        save_load_io.save_param(args, exe, train_prog,
+                                                "step_" + str(steps))
                 steps += 1
-            except fluid.core.EOFException:  
+            except fluid.core.EOFException:
                 data_reader.reset()
                 break
-    
-    if args.save_checkpoint: 
+
+    if args.save_checkpoint:
         save_load_io.save_checkpoint(args, exe, train_prog, "step_final")
-    if args.save_param: 
+    if args.save_param:
         save_load_io.save_param(args, exe, train_prog, "step_final")
 
-    def get_cards(): 
+    def get_cards():
         num = 0
         cards = os.environ.get('CUDA_VISIBLE_DEVICES', '')
-        if cards != '': 
+        if cards != '':
             num = len(cards.split(","))
         return num
 
-    if args.enable_ce: 
+    if args.enable_ce:
         card_num = get_cards()
         pass_time_cost = time.time() - begin_time
         print("test_card_num", card_num)
         print("kpis\ttrain_duration_card%s\t%s" % (card_num, pass_time_cost))
         print("kpis\ttrain_loss_card%s\t%f" % (card_num, ce_loss))
-        
+
 
 if __name__ == '__main__':
-    
+
     args = PDConfig(yaml_file="./data/config/ade.yaml")
     args.build()
     args.Print()
 
     check_cuda(args.use_cuda)
-    
+
     do_train(args)
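The `six.PY2` branch above exists because pickles written under Python 2 need `encoding="bytes"` to load under Python 3. A minimal Python 3 sketch of preparing and reloading such an embedding file (the file name and matrix sizes are illustrative, not the project's actual `word_emb_init` data):

```
import io
import pickle

import numpy as np

# Write a toy [vocab_size, emb_dim] embedding; protocol 2 keeps the
# file readable from Python 2 as well.
emb = np.random.rand(5, 3).astype("float32")
with io.open("word_emb.pkl", "wb") as fw:
    pickle.dump(emb.tolist(), fw, protocol=2)

# Read it back the way the Python 3 branch of do_train() does.
with io.open("word_emb.pkl", "rb") as fr:
    word_emb = np.array(pickle.load(fr, encoding="bytes")).astype("float32")
print(word_emb.shape)  # (5, 3)
```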
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/.run_ce.sh b/PaddleNLP/dialogue_system/dialogue_general_understanding/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/.run_ce.sh
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/.run_ce.sh
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/README.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/README.md
similarity index 97%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/README.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/README.md
index 9942c8cdd07b91b7850f8143e1c55a2145a5900a..2a817ec15a60e4d3c666abf29011b819429e6a66 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/README.md
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/README.md
@@ -62,7 +62,7 @@ SWDA:Switchboard Dialogue Act Corpus;
     数据集、相关模型下载:
     linux环境下:
 ```
-python dgu/prepare_data_and_model.py 
+python dgu/prepare_data_and_model.py
 ```
     数据路径:data/input/data
 
@@ -72,7 +72,7 @@ python dgu/prepare_data_and_model.py
 
     windows环境下:
 ```
-python dgu\prepare_data_and_model.py 
+python dgu\prepare_data_and_model.py
 ```
 
     下载的数据集中已提供了训练集,测试集和验证集,用户如果需要重新生成某任务数据集的训练数据,可执行:
@@ -164,19 +164,19 @@ task_type: train,predict, evaluate, inference, all, 选择5个参数选项中
 训练示例: bash run.sh atis_intent train
 ```
 
-    如果为CPU训练: 
+    如果为CPU训练:
 
 ```
-请将run.sh内参数设置为: 
+请将run.sh内参数设置为:
 1、export CUDA_VISIBLE_DEVICES=
 ```
 
-    如果为GPU训练: 
+    如果为GPU训练:
 
 ```
-请将run.sh内参数设置为: 
+请将run.sh内参数设置为:
 1、如果为单卡训练(用户指定空闲的单卡):
-export CUDA_VISIBLE_DEVICES=0 
+export CUDA_VISIBLE_DEVICES=0
 2、如果为多卡训练(用户指定空闲的多张卡):
 export CUDA_VISIBLE_DEVICES=0,1,2,3
 ```
@@ -252,19 +252,19 @@ task_type: train,predict, evaluate, inference, all, 选择5个参数选项中
 预测示例: bash run.sh atis_intent predict
 ```
 
-    如果为CPU预测: 
+    如果为CPU预测:
 
 ```
-请将run.sh内参数设置为: 
+请将run.sh内参数设置为:
 1、export CUDA_VISIBLE_DEVICES=
 ```
 
-    如果为GPU预测: 
+    如果为GPU预测:
 
 ```
-请将run.sh内参数设置为: 
+请将run.sh内参数设置为:
 支持单卡预测(用户指定空闲的单卡):
-export CUDA_VISIBLE_DEVICES=0 
+export CUDA_VISIBLE_DEVICES=0
 ```
 
 注:预测时,如采用方式一,用户可通过修改run.sh中init_from_params参数来指定自己训练好的需要预测的模型,目前代码中默认为加载官方已经训练好的模型;
@@ -348,7 +348,7 @@ task_type: train,predict, evaluate, inference, all, 选择5个参数选项中
 
 注:评估计算ground_truth和predict_label之间的打分,默认CPU计算即可;
 
-####     方式二: 执行评估相关的代码: 
+####     方式二: 执行评估相关的代码:
 
 ```
 TASK_NAME="atis_intent"  #指定预测的任务名称
@@ -363,7 +363,7 @@ python -u main.py \
 
 #### windows环境下
 ```
-python -u main.py --task_name=atis_intent --use_cuda=false --do_eval=true --evaluation_file=data\input\data\atis\atis_intent\test.txt --output_prediction_file=data\output\pred_atis_intent 
+python -u main.py --task_name=atis_intent --use_cuda=false --do_eval=true --evaluation_file=data\input\data\atis\atis_intent\test.txt --output_prediction_file=data\output\pred_atis_intent
 ```
 
 ### 模型推断
@@ -378,22 +378,22 @@ task_type: train,predict, evaluate, inference, all, 选择5个参数选项中
 保存模型示例: bash run.sh atis_intent inference
 ```
 
-    如果为CPU执行inference model过程: 
+    如果为CPU执行inference model过程:
 
 ```
-请将run.sh内参数设置为: 
+请将run.sh内参数设置为:
 1、export CUDA_VISIBLE_DEVICES=
 ```
 
     如果为GPU执行inference model过程:
 
 ```
-请将run.sh内参数设置为: 
+请将run.sh内参数设置为:
 1、单卡模型推断(用户指定空闲的单卡):
 export CUDA_VISIBLE_DEVICES=0
 ```
 
-####     方式二: 执行inference model相关的代码: 
+####     方式二: 执行inference model相关的代码:
 
 ```
 TASK_NAME="atis_intent"  #指定预测的任务名称
@@ -459,7 +459,7 @@ python -u main.py \
 
     用户也可以根据自己的需求,组建自定义的模型,具体方法如下所示:
 
-    a、自定义数据 
+    a、自定义数据
 
       如用户目前有数据集为**task_name**, 则在**data/input/data**下定义**task_name**文件夹,将数据集存放进去;在**dgu/reader.py**中,新增自定义的数据处理的类,如**udc**数据集对应**UDCProcessor**;  在**train.py**内设置**task_name**和**processor**的对应关系(如**processors = {'udc': reader.UDCProcessor}**).
 
@@ -481,7 +481,7 @@ python -u main.py \
 - Elizabeth Shriberg, Raj Dhillon, Sonali Bhagat, JeremyAng, and Hannah Carvey. 2004. The icsi meetingrecorder dialog act (mrda) corpus. Technical report,INTERNATIONAL COMPUTER SCIENCE INSTBERKELEY CA.
 - Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza-beth Shriberg, Rebecca Bates, Daniel Jurafsky, PaulTaylor, Rachel Martin, Carol Van Ess-Dykema, andMarie Meteer. 2000. Dialogue act modeling for au-tomatic tagging and recognition of conversationalspeech.Computational linguistics, 26(3):339–373.
 - Ye-Yi Wang, Li Deng, and Alex Acero. 2005.  Spo-ken language understanding.IEEE Signal Process-ing Magazine, 22(5):16–31.Jason Williams, Antoine Raux, Deepak Ramachan-dran, and Alan Black. 2013. The dialog state tracking challenge.  InProceedings of the SIGDIAL 2013Conference, pages 404–413.
-- Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc VLe,  Mohammad Norouzi,  Wolfgang Macherey,Maxim  Krikun,  Yuan  Cao,  Qin  Gao,  KlausMacherey,  et al. 2016.   Google’s neural ma-chine translation system: Bridging the gap betweenhuman and machine translation.arXiv preprintarXiv:1609.08144.Kaisheng 
+- Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc VLe,  Mohammad Norouzi,  Wolfgang Macherey,Maxim  Krikun,  Yuan  Cao,  Qin  Gao,  KlausMacherey,  et al. 2016.   Google’s neural ma-chine translation system: Bridging the gap betweenhuman and machine translation.arXiv preprintarXiv:1609.08144.Kaisheng
 - Yao, Geoffrey Zweig, Mei-Yuh Hwang,Yangyang Shi, and Dong Yu. 2013. Recurrent neu-ral networks for language understanding. InInter-speech, pages 2524–2528.
 - Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, YingChen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu.2018.  Multi-turn response selection for chatbotswith deep attention matching network. InProceed-ings of the 56th Annual Meeting of the Associationfor Computational Linguistics (Volume 1: Long Pa-pers), volume 1, pages 1118–1127.
 - Su Zhu and Kai Yu. 2017.  Encoder-decoder withfocus-mechanism for sequence labelling based spo-ken language understanding. In2017 IEEE Interna-tional Conference on Acoustics, Speech and SignalProcessing (ICASSP), pages 5675–5679. IEEE.
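The customization notes near the end of this README reduce to two steps: add a processor class in `dgu/reader.py` and register it in `train.py`. A schematic Python sketch; the class name, file layout and `label<TAB>text` format are illustrative, patterned on the `processors = {'udc': reader.UDCProcessor}` mapping the README quotes, not copied from the actual reader module.

```
import io

class MyTaskProcessor(object):
    """Illustrative processor for a custom dataset 'my_task' stored as
    data/input/data/my_task/{train,dev,test}.txt, one 'label<TAB>text'
    pair per line."""

    def __init__(self, data_dir="./data/input/data"):
        self.data_dir = data_dir

    def get_examples(self, phase):
        examples = []
        path = "%s/my_task/%s.txt" % (self.data_dir, phase)
        with io.open(path, "r", encoding="utf8") as fr:
            for line in fr:
                label, text = line.rstrip("\n").split("\t", 1)
                examples.append((text, int(label)))
        return examples

# In train.py the task_name -> processor mapping would gain one entry,
# alongside existing ones such as 'udc': reader.UDCProcessor.
processors = {"my_task": MyTaskProcessor}
print(sorted(processors))
```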
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/_ce.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/_ce.py
similarity index 67%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/_ce.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/_ce.py
index 0a3a4c20d51845a2deb5a9aad04d7af080d0445b..f9e2798d6469eb6e10a774d1a62cd90a2ed11385 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/_ce.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/_ce.py
@@ -20,20 +20,26 @@ from kpi import CostKpi
 from kpi import DurationKpi
 from kpi import AccKpi
 
-each_step_duration_atis_slot_card1 = DurationKpi('each_step_duration_atis_slot_card1', 0.01, 0, actived=True)
-train_loss_atis_slot_card1 = CostKpi('train_loss_atis_slot_card1', 0.08, 0, actived=True)
-train_acc_atis_slot_card1 = CostKpi('train_acc_atis_slot_card1', 0.01, 0, actived=True)
-each_step_duration_atis_slot_card4 = DurationKpi('each_step_duration_atis_slot_card4', 0.06, 0, actived=True)
-train_loss_atis_slot_card4 = CostKpi('train_loss_atis_slot_card4', 0.03, 0, actived=True)
-train_acc_atis_slot_card4 = CostKpi('train_acc_atis_slot_card4', 0.01, 0, actived=True)
+each_step_duration_atis_slot_card1 = DurationKpi(
+    'each_step_duration_atis_slot_card1', 0.01, 0, actived=True)
+train_loss_atis_slot_card1 = CostKpi(
+    'train_loss_atis_slot_card1', 0.08, 0, actived=True)
+train_acc_atis_slot_card1 = CostKpi(
+    'train_acc_atis_slot_card1', 0.01, 0, actived=True)
+each_step_duration_atis_slot_card4 = DurationKpi(
+    'each_step_duration_atis_slot_card4', 0.06, 0, actived=True)
+train_loss_atis_slot_card4 = CostKpi(
+    'train_loss_atis_slot_card4', 0.03, 0, actived=True)
+train_acc_atis_slot_card4 = CostKpi(
+    'train_acc_atis_slot_card4', 0.01, 0, actived=True)
 
 tracking_kpis = [
-        each_step_duration_atis_slot_card1,
-        train_loss_atis_slot_card1,
-        train_acc_atis_slot_card1,
-        each_step_duration_atis_slot_card4,
-        train_loss_atis_slot_card4,
-        train_acc_atis_slot_card4,
+    each_step_duration_atis_slot_card1,
+    train_loss_atis_slot_card1,
+    train_acc_atis_slot_card1,
+    each_step_duration_atis_slot_card4,
+    train_loss_atis_slot_card4,
+    train_acc_atis_slot_card4,
 ]
 
 
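These KPI objects consume the `kpis\t<name>\t<value>` lines that `do_train` prints when `enable_ce` is on. The `kpi` module itself is part of the continuous-evaluation harness, but the log format is plain enough to parse directly; a small standalone sketch:

```
def parse_kpi_log(lines):
    """Extract {kpi_name: float_value} from lines shaped like
    'kpis<TAB>name<TAB>value', the format printed at the end of do_train()."""
    kpis = {}
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) == 3 and parts[0] == "kpis":
            kpis[parts[1]] = float(parts[2])
    return kpis

log = [
    "some unrelated output",
    "kpis\ttrain_duration_card1\t42.5",
    "kpis\ttrain_loss_card1\t0.0817",
]
print(parse_kpi_log(log))
# {'train_duration_card1': 42.5, 'train_loss_card1': 0.0817}
```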
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/config/dgu.yaml b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/config/dgu.yaml
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/config/dgu.yaml
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/config/dgu.yaml
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/inference_models/inference_models.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/inference_models/inference_models.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/inference_models/inference_models.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/inference_models/inference_models.md
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/input/input.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/input/input.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/input/input.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/input/input.md
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/output/output.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/output/output.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/output/output.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/output/output.md
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/pretrain_model/pretrain_model.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/pretrain_model/pretrain_model.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/pretrain_model/pretrain_model.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/pretrain_model/pretrain_model.md
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/saved_models/saved_models.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/saved_models/saved_models.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/saved_models/saved_models.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/saved_models/saved_models.md
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/__init__.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/__init__.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/__init__.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/batching.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/batching.py
similarity index 88%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/batching.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/batching.py
index d668fd7dfeb0d0fa0bf025a13b8838f700d130b8..a81d2e9960d174f7c1be938c4968db188c2bb9ee 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/batching.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/batching.py
@@ -75,8 +75,8 @@ def mask(batch_tokens, total_token_num, vocab_size, CLS=1, SEP=2, MASK=3):
 
 
 def prepare_batch_data(task_name,
-                       insts, 
-                       max_len, 
+                       insts,
+                       max_len,
                        total_token_num,
                        voc_size=0,
                        pad_id=None,
@@ -98,14 +98,18 @@ def prepare_batch_data(task_name,
     # compatible with squad, whose example includes start/end positions, 
     # or unique id
 
-    if isinstance(insts[0][3], list): 
-        if task_name == "atis_slot": 
-            labels_list = [inst[3] + [0] * (max_len - len(inst[3])) for inst in insts]
-            labels_list = [np.array(labels_list).astype("int64").reshape([-1, max_len])]
-        elif task_name == "dstc2": 
+    if isinstance(insts[0][3], list):
+        if task_name == "atis_slot":
+            labels_list = [
+                inst[3] + [0] * (max_len - len(inst[3])) for inst in insts
+            ]
+            labels_list = [
+                np.array(labels_list).astype("int64").reshape([-1, max_len])
+            ]
+        elif task_name == "dstc2":
             labels_list = [inst[3] for inst in insts]
             labels_list = [np.array(labels_list).astype("int64")]
-    else: 
+    else:
         for i in range(3, len(insts[0]), 1):
             labels = [inst[i] for inst in insts]
             labels = np.array(labels).astype("int64").reshape([-1, 1])
@@ -124,28 +128,25 @@ def prepare_batch_data(task_name,
         out = batch_src_ids
     # Second step: padding
     src_id, self_input_mask = pad_batch_data(
-        out, 
-        max_len, 
-        pad_idx=pad_id, 
-        return_input_mask=True)
+        out, max_len, pad_idx=pad_id, return_input_mask=True)
     pos_id = pad_batch_data(
-        batch_pos_ids, 
-        max_len, 
-        pad_idx=pad_id, 
-        return_pos=False, 
+        batch_pos_ids,
+        max_len,
+        pad_idx=pad_id,
+        return_pos=False,
         return_input_mask=False)
     sent_id = pad_batch_data(
-        batch_sent_ids, 
-        max_len, 
-        pad_idx=pad_id, 
-        return_pos=False, 
+        batch_sent_ids,
+        max_len,
+        pad_idx=pad_id,
+        return_pos=False,
         return_input_mask=False)
 
     if mask_id >= 0:
         return_list = [
             src_id, pos_id, sent_id, self_input_mask, mask_label, mask_pos
         ] + labels_list
-    else: 
+    else:
         return_list = [src_id, pos_id, sent_id, self_input_mask] + labels_list
 
     return return_list if len(return_list) > 1 else return_list[0]
@@ -163,13 +164,13 @@ def pad_batch_data(insts,
     corresponding position data and attention bias.
     """
     return_list = []
-    max_len = max_len_in if max_len_in != -1 else max(len(inst) for inst in insts)
+    max_len = max_len_in if max_len_in != -1 else max(
+        len(inst) for inst in insts)
     # Any token included in dict can be used to pad, since the paddings' loss
     # will be masked out by weights and make no effect on parameter gradients.
 
     inst_data = np.array(
-        [inst + list([pad_idx] * (max_len - len(inst))) for inst in insts
-    ])
+        [inst + list([pad_idx] * (max_len - len(inst))) for inst in insts])
     return_list += [inst_data.astype("int64").reshape([-1, max_len])]
 
     # position data
@@ -183,10 +184,10 @@ def pad_batch_data(insts,
 
     if return_input_mask:
         # This is used to avoid attention on paddings.
-        input_mask_data = np.array([[1] * len(inst) + [0] * 
+        input_mask_data = np.array([[1] * len(inst) + [0] *
                                     (max_len - len(inst)) for inst in insts])
         input_mask_data = np.expand_dims(input_mask_data, axis=-1)
-        return_list += [input_mask_data.astype("float32")] 
+        return_list += [input_mask_data.astype("float32")]
 
     if return_max_len:
         return_list += [max_len]
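What `pad_batch_data` produces is easiest to see in isolation: pad every instance to the batch maximum and emit a float mask that zeroes attention on the padded tail. A numpy-only sketch of the same two outputs:

```
import numpy as np

def pad_batch(insts, pad_idx=0):
    """Pad variable-length id lists and build the attention input mask,
    mirroring the src_id / input_mask outputs of pad_batch_data."""
    max_len = max(len(inst) for inst in insts)
    ids = np.array(
        [inst + [pad_idx] * (max_len - len(inst)) for inst in insts],
        dtype="int64")
    mask = np.array(
        [[1.0] * len(inst) + [0.0] * (max_len - len(inst)) for inst in insts],
        dtype="float32")[:, :, np.newaxis]  # shape [batch, max_len, 1]
    return ids, mask

ids, mask = pad_batch([[5, 2, 9], [7, 1]])
print(ids)               # [[5 2 9] [7 1 0]]
print(mask.squeeze(-1))  # [[1. 1. 1.] [1. 1. 0.]]
```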
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/bert.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/bert.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/bert.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/bert.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_paradigm.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_paradigm.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_paradigm.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_paradigm.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_predict_pack.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_predict_pack.py
similarity index 66%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_predict_pack.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_predict_pack.py
index df0fda32d44a4024805937e59c3f2636f74f9286..13b14f8102f51cfb6b74769d8eb392f0b102d65b 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_predict_pack.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_predict_pack.py
@@ -21,31 +21,34 @@ import paddle
 import paddle.fluid as fluid
 
 
-class DefinePredict(object): 
+class DefinePredict(object):
     """
     Packaging Prediction Results
     """
-    def __init__(self): 
+
+    def __init__(self):
         """
         init
         """
-        self.task_map = {'udc': 'get_matching_res', 
-                         'swda': 'get_cls_res', 
-                         'mrda': 'get_cls_res', 
-                         'atis_intent': 'get_cls_res',
-                         'atis_slot': 'get_sequence_tagging',
-                         'dstc2': 'get_multi_cls_res', 
-                         'dstc2_asr': 'get_multi_cls_res', 
-                         'multi-woz': 'get_multi_cls_res'}
+        self.task_map = {
+            'udc': 'get_matching_res',
+            'swda': 'get_cls_res',
+            'mrda': 'get_cls_res',
+            'atis_intent': 'get_cls_res',
+            'atis_slot': 'get_sequence_tagging',
+            'dstc2': 'get_multi_cls_res',
+            'dstc2_asr': 'get_multi_cls_res',
+            'multi-woz': 'get_multi_cls_res'
+        }
 
-    def get_matching_res(self, probs, params=None): 
+    def get_matching_res(self, probs, params=None):
         """
         get matching score
         """
         probs = list(probs)
         return probs[1]
 
-    def get_cls_res(self, probs, params=None): 
+    def get_cls_res(self, probs, params=None):
         """
         get da classify tag
         """
@@ -54,7 +57,7 @@ class DefinePredict(object):
         tag = probs.index(max_prob)
         return tag
 
-    def get_sequence_tagging(self, probs, params=None): 
+    def get_sequence_tagging(self, probs, params=None):
         """
         get sequence tagging tag
         """
@@ -63,23 +66,19 @@ class DefinePredict(object):
         labels = [" ".join([str(l) for l in list(l_l)]) for l_l in batch_labels]
         return labels
 
-    def get_multi_cls_res(self, probs, params=None): 
+    def get_multi_cls_res(self, probs, params=None):
         """
         get dst classify tag
         """
         labels = []
         probs = list(probs)
-        for i in range(len(probs)): 
-            if probs[i] >= 0.5: 
+        for i in range(len(probs)):
+            if probs[i] >= 0.5:
                 labels.append(i)
-        if not labels: 
+        if not labels:
             max_prob = max(probs)
             label_str = str(probs.index(max_prob))
-        else: 
+        else:
             label_str = " ".join([str(l) for l in sorted(labels)])
 
         return label_str
-
-
-
-
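The decision rule in `get_multi_cls_res` (take every label whose probability clears 0.5, else fall back to the argmax) is compact enough to restate standalone:

```
def multi_label_tags(probs, threshold=0.5):
    """All labels whose probability clears the threshold; if none does,
    fall back to the single most probable label, as in get_multi_cls_res."""
    labels = [i for i, p in enumerate(probs) if p >= threshold]
    if not labels:
        labels = [max(range(len(probs)), key=lambda i: probs[i])]
    return " ".join(str(l) for l in labels)

print(multi_label_tags([0.7, 0.1, 0.6]))  # "0 2"
print(multi_label_tags([0.3, 0.2, 0.1]))  # "0" (argmax fallback)
```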
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/evaluation.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/evaluation.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/evaluation.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/evaluation.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/optimization.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/optimization.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/optimization.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/optimization.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/prepare_data_and_model.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/prepare_data_and_model.py
similarity index 58%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/prepare_data_and_model.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/prepare_data_and_model.py
index d47f8c4bb183811617d7612f31a7f706f6accda9..83c72064c7cf9e0bc8822f0254fc468d51678f3e 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/prepare_data_and_model.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/prepare_data_and_model.py
@@ -20,51 +20,60 @@ import sys
 import io
 import os
 
-
-URLLIB=urllib
-if sys.version_info >= (3, 0): 
+URLLIB = urllib
+if sys.version_info >= (3, 0):
     import urllib.request
-    URLLIB=urllib.request
+    URLLIB = urllib.request
 
-DATA_MODEL_PATH = {"DATA_PATH": "https://baidu-nlp.bj.bcebos.com/dmtk_data_1.0.0.tar.gz", 
-                   "PRETRAIN_MODEL": "https://bert-models.bj.bcebos.com/uncased_L-12_H-768_A-12.tar.gz", 
-                   "TRAINED_MODEL": "https://baidu-nlp.bj.bcebos.com/dgu_models_2.0.0.tar.gz"} 
+DATA_MODEL_PATH = {
+    "DATA_PATH": "https://baidu-nlp.bj.bcebos.com/dmtk_data_1.0.0.tar.gz",
+    "PRETRAIN_MODEL":
+    "https://bert-models.bj.bcebos.com/uncased_L-12_H-768_A-12.tar.gz",
+    "TRAINED_MODEL": "https://baidu-nlp.bj.bcebos.com/dgu_models_2.0.0.tar.gz"
+}
 
-PATH_MAP = {'DATA_PATH': "./data/input", 
-            'PRETRAIN_MODEL': './data/pretrain_model', 
-            'TRAINED_MODEL': './data/saved_models'}
+PATH_MAP = {
+    'DATA_PATH': "./data/input",
+    'PRETRAIN_MODEL': './data/pretrain_model',
+    'TRAINED_MODEL': './data/saved_models'
+}
 
 
-def un_tar(tar_name, dir_name): 
-    try: 
+def un_tar(tar_name, dir_name):
+    try:
         t = tarfile.open(tar_name)
-        t.extractall(path = dir_name)
+        t.extractall(path=dir_name)
         return True
     except Exception as e:
         print(e)
         return False
 
 
-def download_model_and_data(): 
+def download_model_and_data():
     print("Downloading dgu data, pretrain model and trained models......")
     print("This process is quite long, please wait patiently............")
-    for path in ['./data/input/data', './data/pretrain_model/uncased_L-12_H-768_A-12', './data/saved_models/trained_models']: 
-        if not os.path.exists(path): 
+    for path in [
+            './data/input/data',
+            './data/pretrain_model/uncased_L-12_H-768_A-12',
+            './data/saved_models/trained_models'
+    ]:
+        if not os.path.exists(path):
             continue
         shutil.rmtree(path)
-    for path_key in DATA_MODEL_PATH: 
+    for path_key in DATA_MODEL_PATH:
         filename = os.path.basename(DATA_MODEL_PATH[path_key])
-        URLLIB.urlretrieve(DATA_MODEL_PATH[path_key], os.path.join("./", filename))
+        URLLIB.urlretrieve(DATA_MODEL_PATH[path_key],
+                           os.path.join("./", filename))
         state = un_tar(filename, PATH_MAP[path_key])
-        if not state: 
+        if not state:
             print("Tar %s error....." % path_key)
             return False
         os.remove(filename)
     return True
 
 
-if __name__ == "__main__": 
+if __name__ == "__main__":
     state = download_model_and_data()
-    if not state: 
+    if not state:
         exit(1)
     print("Downloading data and models sucess......")
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/reader.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/reader.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/reader.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/reader.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/README.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/README.md
similarity index 99%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/README.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/README.md
index 0f6f4f410fac28a469050b64fce77adaa1824671..351dc8d249ec1b460e9ef00ac100f56579791f07 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/README.md
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/README.md
@@ -6,7 +6,7 @@ scripts:运行数据处理脚本目录, 将官方公开数据集转换成模
 python run_build_data.py udc
 生成数据在dialogue_general_understanding/data/input/data/udc
 
-2)、生成DA任务所需要的训练集、开发集、测试集时: 
+2)、生成DA任务所需要的训练集、开发集、测试集时:
   python run_build_data.py swda
   python run_build_data.py mrda
   生成数据分别在dialogue_general_understanding/data/input/data/swda和dialogue_general_understanding/data/input/data/mrda
@@ -19,6 +19,3 @@ python run_build_data.py udc
   python run_build_data.py atis
   生成槽位识别数据在dialogue_general_understanding/data/input/data/atis/atis_slot
   生成意图识别数据在dialogue_general_understanding/data/input/data/atis/atis_intent
-
-
-
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py
similarity index 77%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py
index 09f3746039d3d55f9b824e76bf8434e95bffa670..66b0d3ca3dc7fb5673d83fcaab24abc59524c950 100755
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py
@@ -12,7 +12,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
 """build swda train dev test dataset"""
 
 import json
@@ -23,11 +22,12 @@ import io
 import re
 
 
-class ATIS(object): 
+class ATIS(object):
     """
     nlu dataset atis data process
     """
-    def __init__(self): 
+
+    def __init__(self):
         """
         init instance
         """
@@ -41,91 +41,94 @@ class ATIS(object):
         self.map_tag_slot = "../../data/input/data/atis/atis_slot/map_tag_slot_id.txt"
         self.map_tag_intent = "../../data/input/data/atis/atis_intent/map_tag_intent_id.txt"
 
-    def _load_file(self, data_type): 
+    def _load_file(self, data_type):
         """
         load dataset filename
         """
         slot_stat = os.path.exists(self.out_slot_dir)
-        if not slot_stat: 
+        if not slot_stat:
             os.makedirs(self.out_slot_dir)
         intent_stat = os.path.exists(self.out_intent_dir)
-        if not intent_stat: 
+        if not intent_stat:
             os.makedirs(self.out_intent_dir)
         src_examples = []
         json_file = os.path.join(self.src_dir, "%s.json" % data_type)
         load_f = io.open(json_file, 'r', encoding="utf8")
         json_dict = json.load(load_f)
         examples = json_dict['rasa_nlu_data']['common_examples']
-        for example in examples: 
+        for example in examples:
             text = example.get('text')
             intent = example.get('intent')
             entities = example.get('entities')
             src_examples.append((text, intent, entities))
         return src_examples
 
-    def _parser_intent_data(self, examples, data_type): 
+    def _parser_intent_data(self, examples, data_type):
         """
         parser intent dataset
         """
         out_filename = "%s/%s.txt" % (self.out_intent_dir, data_type)
         fw = io.open(out_filename, 'w', encoding="utf8")
-        for example in examples: 
-            if example[1] not in self.intent_dict: 
+        for example in examples:
+            if example[1] not in self.intent_dict:
                 self.intent_dict[example[1]] = self.intent_id
                 self.intent_id += 1
-            fw.write(u"%s\t%s\n" % (self.intent_dict[example[1]], example[0].lower()))
+            fw.write(u"%s\t%s\n" %
+                     (self.intent_dict[example[1]], example[0].lower()))
 
         fw = io.open(self.map_tag_intent, 'w', encoding="utf8")
-        for tag in self.intent_dict: 
+        for tag in self.intent_dict:
             fw.write(u"%s\t%s\n" % (tag, self.intent_dict[tag]))
 
-    def _parser_slot_data(self, examples, data_type): 
+    def _parser_slot_data(self, examples, data_type):
         """
         parser slot dataset
         """
         out_filename = "%s/%s.txt" % (self.out_slot_dir, data_type)
         fw = io.open(out_filename, 'w', encoding="utf8")
-        for example in examples: 
+        for example in examples:
             tags = []
             text = example[0]
             entities = example[2]
-            if not entities: 
+            if not entities:
                 tags = [str(self.slot_dict['O'])] * len(text.strip().split())
                 continue
-            for i in range(len(entities)): 
+            for i in range(len(entities)):
                 enty = entities[i]
                 start = enty['start']
                 value_num = len(enty['value'].split())
                 tags_slot = []
-                for j in range(value_num): 
-                    if j == 0: 
+                for j in range(value_num):
+                    if j == 0:
                         bround_tag = "B"
-                    else: 
+                    else:
                         bround_tag = "I"
                     tag = "%s-%s" % (bround_tag, enty['entity'])
-                    if tag not in self.slot_dict: 
+                    if tag not in self.slot_dict:
                         self.slot_dict[tag] = self.slot_id
                         self.slot_id += 1
                     tags_slot.append(str(self.slot_dict[tag]))
-                if i == 0: 
-                    if start not in [0, 1]: 
-                        prefix_num = len(text[: start].strip().split())
+                if i == 0:
+                    if start not in [0, 1]:
+                        prefix_num = len(text[:start].strip().split())
                         tags.extend([str(self.slot_dict['O'])] * prefix_num)
                     tags.extend(tags_slot)
-                else: 
-                    prefix_num = len(text[entities[i - 1]['end']: start].strip().split())
+                else:
+                    prefix_num = len(text[entities[i - 1]['end']:start].strip()
+                                     .split())
                     tags.extend([str(self.slot_dict['O'])] * prefix_num)
                     tags.extend(tags_slot)
-            if entities[-1]['end'] < len(text): 
+            if entities[-1]['end'] < len(text):
                 suffix_num = len(text[entities[-1]['end']:].strip().split())
                 tags.extend([str(self.slot_dict['O'])] * suffix_num)
-            fw.write(u"%s\t%s\n" % (text.encode('utf8'), " ".join(tags).encode('utf8')))
-        
+            fw.write(u"%s\t%s\n" %
+                     (text.encode('utf8'), " ".join(tags).encode('utf8')))
+
         fw = io.open(self.map_tag_slot, 'w', encoding="utf8")
-        for slot in self.slot_dict: 
+        for slot in self.slot_dict:
             fw.write(u"%s\t%s\n" % (slot, self.slot_dict[slot]))
 
-    def get_train_dataset(self): 
+    def get_train_dataset(self):
         """
         parser train dataset and print train.txt
         """
@@ -133,7 +136,7 @@ class ATIS(object):
         self._parser_intent_data(train_examples, "train")
         self._parser_slot_data(train_examples, "train")
 
-    def get_test_dataset(self): 
+    def get_test_dataset(self):
         """
         parser test dataset and print test.txt
         """
@@ -141,7 +144,7 @@ class ATIS(object):
         self._parser_intent_data(test_examples, "test")
         self._parser_slot_data(test_examples, "test")
 
-    def main(self): 
+    def main(self):
         """
         run data process
         """
@@ -149,10 +152,6 @@ class ATIS(object):
         self.get_test_dataset()
 
 
-if __name__ == "__main__": 
+if __name__ == "__main__":
     atis_inst = ATIS()
     atis_inst.main()
-
-
-
-
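The span-to-tag bookkeeping in `_parser_slot_data` maps character-offset entities onto whitespace tokens with B-/I-/O labels. A compact standalone sketch of that idea (simplified: it recomputes token offsets instead of tracking prefix lengths, and uses string tags rather than the numeric `slot_dict` ids):

```
def bio_tags(text, entities):
    """Token-level B-/I-/O tags from character-span entities, assuming
    whitespace tokenization."""
    tokens = text.split()
    offsets, pos = [], 0                 # character offset of each token
    for tok in tokens:
        pos = text.index(tok, pos)
        offsets.append(pos)
        pos += len(tok)
    tags = ["O"] * len(tokens)
    for ent in entities:
        inside = [i for i, off in enumerate(offsets)
                  if ent["start"] <= off < ent["end"]]
        for j, i in enumerate(inside):
            tags[i] = ("B-" if j == 0 else "I-") + ent["entity"]
    return tags

text = "show flights from boston to denver"
ents = [{"start": 18, "end": 24, "entity": "fromloc"},
        {"start": 28, "end": 34, "entity": "toloc"}]
print(bio_tags(text, ents))
# ['O', 'O', 'O', 'B-fromloc', 'O', 'B-toloc']
```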
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py
similarity index 75%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py
index 9655ce7268028ac5a30b843105de09c1f13a7b68..15e457deac46e44dba1671250922f1ad4599ad05 100755
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py
@@ -24,11 +24,12 @@ import re
 import commonlib
 
 
-class DSTC2(object): 
+class DSTC2(object):
     """
     dialogue state tracking dstc2 data process
     """
-    def __init__(self): 
+
+    def __init__(self):
         """
         init instance
         """
@@ -42,16 +43,17 @@ class DSTC2(object):
         self._load_file()
         self._load_ontology()
 
-    def _load_file(self): 
+    def _load_file(self):
         """
         load dataset filename
         """
         self.data_dict = commonlib.load_dict(self.data_list)
-        for data_type in self.data_dict: 
-            for i in range(len(self.data_dict[data_type])): 
-                self.data_dict[data_type][i] = os.path.join(self.src_dir, self.data_dict[data_type][i])
+        for data_type in self.data_dict:
+            for i in range(len(self.data_dict[data_type])):
+                self.data_dict[data_type][i] = os.path.join(
+                    self.src_dir, self.data_dict[data_type][i])
 
-    def _load_ontology(self): 
+    def _load_ontology(self):
         """
         load ontology tag
         """
@@ -60,8 +62,8 @@ class DSTC2(object):
         fr = io.open(self.onto_json, 'r', encoding="utf8")
         ontology = json.load(fr)
         slots_values = ontology['informable']
-        for slot in slots_values: 
-            for value in slots_values[slot]: 
+        for slot in slots_values:
+            for value in slots_values[slot]:
                 key = "%s_%s" % (slot, value)
                 self.map_tag_dict[key] = tag_id
                 tag_id += 1
@@ -69,22 +71,22 @@ class DSTC2(object):
             self.map_tag_dict[key] = tag_id
             tag_id += 1
 
-    def _parser_dataset(self, data_type): 
+    def _parser_dataset(self, data_type):
         """
         parser train dev test dataset
         """
         stat = os.path.exists(self.out_dir)
-        if not stat: 
+        if not stat:
             os.makedirs(self.out_dir)
         asr_stat = os.path.exists(self.out_asr_dir)
-        if not asr_stat: 
+        if not asr_stat:
             os.makedirs(self.out_asr_dir)
         out_file = os.path.join(self.out_dir, "%s.txt" % data_type)
         out_asr_file = os.path.join(self.out_asr_dir, "%s.txt" % data_type)
         fw = io.open(out_file, 'w', encoding="utf8")
         fw_asr = io.open(out_asr_file, 'w', encoding="utf8")
         data_list = self.data_dict.get(data_type)
-        for fn in data_list: 
+        for fn in data_list:
             log_file = os.path.join(fn, "log.json")
             label_file = os.path.join(fn, "label.json")
             f_log = io.open(log_file, 'r', encoding="utf8")
@@ -93,49 +95,59 @@ class DSTC2(object):
             label_json = json.load(f_label)
             session_id = log_json['session-id']
             assert len(label_json["turns"]) == len(log_json["turns"])
-            for i in range(len(label_json["turns"])): 
+            for i in range(len(label_json["turns"])):
                 log_turn = log_json["turns"][i]
                 label_turn = label_json["turns"][i]
                 assert log_turn["turn-index"] == label_turn["turn-index"]
-                labels = ["%s_%s" % (slot, label_turn["goal-labels"][slot]) for slot in label_turn["goal-labels"]]
-                labels_ids = " ".join([str(self.map_tag_dict.get(label, self.map_tag_dict["%s_none" % label.split('_')[0]])) for label in labels])
+                labels = [
+                    "%s_%s" % (slot, label_turn["goal-labels"][slot])
+                    for slot in label_turn["goal-labels"]
+                ]
+                labels_ids = " ".join([
+                    str(
+                        self.map_tag_dict.get(label, self.map_tag_dict[
+                            "%s_none" % label.split('_')[0]]))
+                    for label in labels
+                ])
                 mach = log_turn['output']['transcript']
                 user = label_turn['transcription']
-                if not labels_ids.strip(): 
+                if not labels_ids.strip():
                     labels_ids = self.map_tag_dict['none']
                 out = "%s\t%s\1%s\t%s" % (session_id, mach, user, labels_ids)
-                user_asr = log_turn['input']['live']['asr-hyps'][0]['asr-hyp'].strip()
-                out_asr = "%s\t%s\1%s\t%s" % (session_id, mach, user_asr, labels_ids)
+                user_asr = log_turn['input']['live']['asr-hyps'][0][
+                    'asr-hyp'].strip()
+                out_asr = "%s\t%s\1%s\t%s" % (session_id, mach, user_asr,
+                                              labels_ids)
                 fw.write(u"%s\n" % out.encode('utf8'))
                 fw_asr.write(u"%s\n" % out_asr.encode('utf8'))
 
-    def get_train_dataset(self): 
+    def get_train_dataset(self):
         """
         parser train dataset and print train.txt
         """
         self._parser_dataset("train")
 
-    def get_dev_dataset(self): 
+    def get_dev_dataset(self):
         """
         parser dev dataset and print dev.txt
         """
         self._parser_dataset("dev")
 
-    def get_test_dataset(self): 
+    def get_test_dataset(self):
         """
         parser test dataset and print test.txt
         """
         self._parser_dataset("test")
 
-    def get_labels(self): 
+    def get_labels(self):
         """
         get tag and map ids file
         """
         fw = io.open(self.map_tag, 'w', encoding="utf8")
-        for elem in self.map_tag_dict: 
+        for elem in self.map_tag_dict:
             fw.write(u"%s\t%s\n" % (elem, self.map_tag_dict[elem]))
 
-    def main(self): 
+    def main(self):
         """
         run data processing
         """
@@ -144,10 +156,7 @@ class DSTC2(object):
         self.get_test_dataset()
         self.get_labels()
 
-if __name__ == "__main__": 
+
+if __name__ == "__main__":
     dstc_inst = DSTC2()
     dstc_inst.main()
-
-
-
-
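
The reflowed comprehension in the hunk above maps each DSTC2 goal label ("slot_value") to an id through map_tag_dict, falling back to the slot's "_none" tag when the pair is unseen. A minimal standalone sketch of that lookup, with a hypothetical tag dictionary:

    # hypothetical ids; the real dictionary is built elsewhere in the script
    map_tag_dict = {"food_none": 0, "food_chinese": 1, "area_none": 2, "none": 3}

    goal_labels = {"food": "chinese", "area": "north"}  # one turn's goal-labels
    labels = ["%s_%s" % (slot, goal_labels[slot]) for slot in goal_labels]
    labels_ids = " ".join([
        str(map_tag_dict.get(label, map_tag_dict["%s_none" % label.split('_')[0]]))
        for label in labels
    ])
    print(labels_ids)  # "1 2": food_chinese is known, area_north falls back to area_none
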
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py
similarity index 81%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py
index e5c0406fce45637364e3dc8ed7cc2ed7739c15a1..8a6419d8ea4b631688c3361d0a96a4b44722682b 100755
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py
@@ -23,11 +23,12 @@ import re
 import commonlib
 
 
-class MRDA(object): 
+class MRDA(object):
     """
     data processing for the MRDA dialogue act dataset
     """
-    def __init__(self): 
+
+    def __init__(self):
         """
         init instance
         """
@@ -41,7 +42,7 @@ class MRDA(object):
         self._load_file()
         self.tag_dict = commonlib.load_voc(self.voc_map_tag)
 
-    def _load_file(self): 
+    def _load_file(self):
         """
         load dataset filename
         """
@@ -49,30 +50,30 @@ class MRDA(object):
         self.trans_dict = {}
         self.data_dict = commonlib.load_dict(self.data_list)
         file_list, file_path = commonlib.get_file_list(self.src_dir)
-        for i in range(len(file_list)): 
+        for i in range(len(file_list)):
             name = file_list[i]
             keyword = name.split('.')[0]
-            if 'dadb' in name: 
+            if 'dadb' in name:
                 self.dadb_dict[keyword] = file_path[i]
-            if 'trans' in name: 
+            if 'trans' in name:
                 self.trans_dict[keyword] = file_path[i]
 
-    def load_dadb(self, data_type): 
+    def load_dadb(self, data_type):
         """
         load dadb dataset
         """
         dadb_dict = {}
         conv_id_list = []
         dadb_list = self.data_dict[data_type]
-        for dadb_key in dadb_list: 
+        for dadb_key in dadb_list:
             dadb_file = self.dadb_dict[dadb_key]
             fr = io.open(dadb_file, 'r', encoding="utf8")
-            row = csv.reader(fr, delimiter = ',')
-            for line in row: 
+            row = csv.reader(fr, delimiter=',')
+            for line in row:
                 elems = line
                 conv_id = elems[2]
                 conv_id_list.append(conv_id)
-                if len(elems) != 14: 
+                if len(elems) != 14:
                     continue
                 error_code = elems[3]
                 da_tag = elems[-9]
@@ -80,17 +81,17 @@ class MRDA(object):
                 dadb_dict[conv_id] = (error_code, da_ori_tag, da_tag)
         return dadb_dict, conv_id_list
 
-    def load_trans(self, data_type): 
+    def load_trans(self, data_type):
         """load trans data"""
         trans_dict = {}
         trans_list = self.data_dict[data_type]
-        for trans_key in trans_list: 
+        for trans_key in trans_list:
             trans_file = self.trans_dict[trans_key]
             fr = io.open(trans_file, 'r', encoding="utf8")
-            row = csv.reader(fr, delimiter = ',')
-            for line in row: 
+            row = csv.reader(fr, delimiter=',')
+            for line in row:
                 elems = line
-                if len(elems) != 3: 
+                if len(elems) != 3:
                     continue
                 conv_id = elems[0]
                 text = elems[1]
@@ -98,7 +99,7 @@ class MRDA(object):
                 trans_dict[conv_id] = (text, text_process)
         return trans_dict
 
-    def _parser_dataset(self, data_type): 
+    def _parser_dataset(self, data_type):
         """
         parse the train/dev/test datasets
         """
@@ -106,50 +107,51 @@ class MRDA(object):
         dadb_dict, conv_id_list = self.load_dadb(data_type)
         trans_dict = self.load_trans(data_type)
         fw = io.open(out_filename, 'w', encoding="utf8")
-        for elem in conv_id_list: 
+        for elem in conv_id_list:
             v_dadb = dadb_dict[elem]
             v_trans = trans_dict[elem]
             da_tag = v_dadb[2]
-            if da_tag not in self.tag_dict: 
+            if da_tag not in self.tag_dict:
                 continue
             tag = self.tag_dict[da_tag]
-            if tag == "Z": 
+            if tag == "Z":
                 continue
-            if tag not in self.map_tag_dict: 
+            if tag not in self.map_tag_dict:
                 self.map_tag_dict[tag] = self.tag_id
                 self.tag_id += 1
             caller = elem.split('_')[0].split('-')[-1]
             conv_no = elem.split('_')[0].split('-')[0]
-            out = "%s\t%s\t%s\t%s" % (conv_no, self.map_tag_dict[tag], caller, v_trans[0])
+            out = "%s\t%s\t%s\t%s" % (conv_no, self.map_tag_dict[tag], caller,
+                                      v_trans[0])
             fw.write(u"%s\n" % out)
 
-    def get_train_dataset(self): 
+    def get_train_dataset(self):
         """
         parse the train dataset and write train.txt
         """
         self._parser_dataset("train")
 
-    def get_dev_dataset(self): 
+    def get_dev_dataset(self):
         """
         parse the dev dataset and write dev.txt
         """
         self._parser_dataset("dev")
 
-    def get_test_dataset(self): 
+    def get_test_dataset(self):
         """
         parse the test dataset and write test.txt
         """
         self._parser_dataset("test")
 
-    def get_labels(self): 
+    def get_labels(self):
         """
         write the tag-to-id map file
         """
         fw = io.open(self.map_tag, 'w', encoding="utf8")
-        for elem in self.map_tag_dict: 
+        for elem in self.map_tag_dict:
             fw.write(u"%s\t%s\n" % (elem, self.map_tag_dict[elem]))
 
-    def main(self): 
+    def main(self):
         """
         run data processing
         """
@@ -158,10 +160,7 @@ class MRDA(object):
         self.get_test_dataset()
         self.get_labels()
 
-if __name__ == "__main__": 
+
+if __name__ == "__main__":
     mrda_inst = MRDA()
     mrda_inst.main()
-
-
-
-
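
In the hunk above, caller and conv_no are both carved out of the utterance id with chained splits; assuming ids shaped like <meeting>-<caller>_<index> (a hypothetical example below), the two expressions select the caller and meeting fields:

    conv_id = "Bmr021-c2_000123"                    # hypothetical MRDA utterance id
    caller = conv_id.split('_')[0].split('-')[-1]   # -> "c2"
    conv_no = conv_id.split('_')[0].split('-')[0]   # -> "Bmr021"
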
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py
similarity index 80%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py
index 441d2852c760e9cef31147e666855f89dba406bb..913c4ade38785612592e6ef21a7bea740601a09e 100755
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py
@@ -23,11 +23,12 @@ import re
 import commonlib
 
 
-class SWDA(object): 
+class SWDA(object):
     """
     data processing for the SWDA dialogue act dataset
     """
-    def __init__(self): 
+
+    def __init__(self):
         """
         init instance
         """
@@ -39,94 +40,94 @@ class SWDA(object):
         self.src_dir = "../../data/input/data/swda/source_data/swda"
         self._load_file()
 
-    def _load_file(self): 
+    def _load_file(self):
         """
         load dataset filename
         """
         self.data_dict = commonlib.load_dict(self.data_list)
         self.file_dict = {}
         child_dir = commonlib.get_dir_list(self.src_dir)
-        for chd in child_dir: 
+        for chd in child_dir:
             file_list, file_path = commonlib.get_file_list(chd)
-            for i in range(len(file_list)): 
-                name = file_list[i] 
+            for i in range(len(file_list)):
+                name = file_list[i]
                 keyword = "sw%s" % name.split('.')[0].split('_')[-1]
                 self.file_dict[keyword] = file_path[i]
 
-    def _parser_dataset(self, data_type): 
+    def _parser_dataset(self, data_type):
         """
         parse the train/dev/test datasets
         """
         out_filename = "%s/%s.txt" % (self.out_dir, data_type)
         fw = io.open(out_filename, 'w', encoding='utf8')
-        for name in self.data_dict[data_type]: 
+        for name in self.data_dict[data_type]:
             file_path = self.file_dict[name]
             fr = io.open(file_path, 'r', encoding="utf8")
             idx = 0
-            row = csv.reader(fr, delimiter = ',')
-            for r in row: 
-                if idx == 0: 
+            row = csv.reader(fr, delimiter=',')
+            for r in row:
+                if idx == 0:
                     idx += 1
                     continue
                 out = self._parser_utterence(r)
                 fw.write(u"%s\n" % out)
 
-    def _clean_text(self, text): 
+    def _clean_text(self, text):
         """
         text cleaning for dialogue act dataset
         """
-        if text.startswith('<') and text.endswith('>.'): 
+        if text.startswith('<') and text.endswith('>.'):
             return text
         if "[" in text or "]" in text:
             stat = True
-        else: 
+        else:
             stat = False
         group = re.findall(r"\[.*?\+.*?\]", text)
-        while group and stat: 
-            for elem in group: 
+        while group and stat:
+            for elem in group:
                 elem_src = elem
                 elem = re.sub(r'\+', '', elem.lstrip('[').rstrip(']'))
                 text = text.replace(elem_src, elem)
-            if "[" in text or "]" in text: 
+            if "[" in text or "]" in text:
                 stat = True
-            else: 
+            else:
                 stat = False
             group = re.findall(r"\[.*?\+.*?\]", text)
-        if "{" in text or "}" in text: 
+        if "{" in text or "}" in text:
             stat = True
-        else: 
+        else:
             stat = False
         group = re.findall("{[A-Z].*?}", text)
-        while group and stat: 
+        while group and stat:
             child_group = re.findall("{[A-Z]*(.*?)}", text)
-            for i in range(len(group)):  
+            for i in range(len(group)):
                 text = text.replace(group[i], child_group[i])
-            if "{" in text or "}" in text: 
+            if "{" in text or "}" in text:
                 stat = True
-            else: 
+            else:
                 stat = False
             group = re.findall("{[A-Z].*?}", text)
-        if "(" in text or ")" in text: 
+        if "(" in text or ")" in text:
             stat = True
-        else: 
+        else:
             stat = False
         group = re.findall(r"\(\(.*?\)\)", text)
-        while group and stat: 
-            for elem in group: 
-                if elem: 
+        while group and stat:
+            for elem in group:
+                if elem:
                     elem_clean = re.sub(r"\(|\)", "", elem)
                     text = text.replace(elem, elem_clean)
-                else: 
+                else:
                     text = text.replace(elem, "mumblex")
             if "(" in text or ")" in text:
                 stat = True
-            else: 
+            else:
                 stat = False
             group = re.findall(r"\(\((.*?)\)\)", text)
 
         group = re.findall(r"<.*?>", text)
-        if group: 
-            for elem in group: 
+        if group:
+            for elem in group:
                 text = text.replace(elem, "")
 
         text = re.sub(r" \'s", "\'s", text)
@@ -137,24 +138,24 @@ class SWDA(object):
         text = re.sub(r"\[|\]|\+|\>|\<|\{|\}", "", text)
         return text.strip().lower()
 
-    def _map_tag(self, da_tag): 
+    def _map_tag(self, da_tag):
         """
         map tag to 42 classes
         """
         curr_da_tags = []
         curr_das = re.split(r"\s*[,;]\s*", da_tag)
-        for curr_da in curr_das: 
+        for curr_da in curr_das:
             if curr_da == "qy_d" or curr_da == "qw^d" or curr_da == "b^m":
                 pass
             elif curr_da == "nn^e":
                 curr_da = "ng"
             elif curr_da == "ny^e":
                 curr_da = "na"
-            else: 
+            else:
                 curr_da = re.sub(r'(.)\^.*', r'\1', curr_da)
                 curr_da = re.sub(r'[\(\)@*]', '', curr_da)
                 tag = curr_da
-                if tag in ('qr', 'qy'): 
+                if tag in ('qr', 'qy'):
                     tag = 'qy'
                 elif tag in ('fe', 'ba'):
                     tag = 'ba'
@@ -170,12 +171,12 @@ class SWDA(object):
                     tag = 'fo_o_fw_"_by_bc'
                 curr_da = tag
             curr_da_tags.append(curr_da)
-        if curr_da_tags[0] not in self.map_tag_dict: 
+        if curr_da_tags[0] not in self.map_tag_dict:
             self.map_tag_dict[curr_da_tags[0]] = self.tag_id
             self.tag_id += 1
         return self.map_tag_dict[curr_da_tags[0]]
-    
-    def _parser_utterence(self, line): 
+
+    def _parser_utterence(self, line):
         """
         parse one dialogue turn
         """
@@ -188,34 +189,34 @@ class SWDA(object):
 
         out = "%s\t%s\t%s\t%s" % (conversation_no, act_tag, caller, text)
         return out
-        
-    def get_train_dataset(self): 
+
+    def get_train_dataset(self):
         """
         parse the train dataset and write train.txt
         """
         self._parser_dataset("train")
 
-    def get_dev_dataset(self): 
+    def get_dev_dataset(self):
         """
         parse the dev dataset and write dev.txt
         """
         self._parser_dataset("dev")
 
-    def get_test_dataset(self): 
+    def get_test_dataset(self):
         """
         parse the test dataset and write test.txt
         """
         self._parser_dataset("test")
 
-    def get_labels(self): 
+    def get_labels(self):
         """
         get tag and map ids file
         """
         fw = io.open(self.map_tag, 'w', encoding='utf8')
-        for elem in self.map_tag_dict: 
+        for elem in self.map_tag_dict:
             fw.write(u"%s\t%s\n" % (elem, self.map_tag_dict[elem]))
 
-    def main(self): 
+    def main(self):
         """
         run data processing
         """
@@ -224,10 +225,7 @@ class SWDA(object):
         self.get_test_dataset()
         self.get_labels()
 
-if __name__ == "__main__": 
+
+if __name__ == "__main__":
     swda_inst = SWDA()
     swda_inst.main()
-
-
-
-
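
One pass of the disfluency loop in _clean_text above, in isolation: a bracketed "[ reparandum + repair ]" group loses its brackets and the '+', while both word spans are kept (hypothetical SWDA-style input):

    import re

    text = "[ I, + I ] think so."  # hypothetical disfluency markup
    for elem_src in re.findall(r"\[.*?\+.*?\]", text):
        elem = re.sub(r"\+", "", elem_src.lstrip("[").rstrip("]"))
        text = text.replace(elem_src, elem)
    print(text)  # " I,  I  think so."
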
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/commonlib.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/commonlib.py
similarity index 86%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/commonlib.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/commonlib.py
index b223a9f2b4eb0b8c9e650759a7b39cc5282cdceb..fd07b4a710d1b000a4896cc3f34785d127ac606f 100755
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/commonlib.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/commonlib.py
@@ -25,52 +25,49 @@ def get_file_list(dir_name):
     file_list = list()
     file_path = list()
     for root, dirs, files in os.walk(dir_name):
-        for file in files: 
+        for file in files:
             file_list.append(file)
             file_path.append(os.path.join(root, file))
     return file_list, file_path
 
 
-def get_dir_list(dir_name): 
+def get_dir_list(dir_name):
     """
     get directory names
     """
     child_dir = []
     dir_list = os.listdir(dir_name)
-    for cur_file in dir_list: 
+    for cur_file in dir_list:
         path = os.path.join(dir_name, cur_file)
-        if not os.path.isdir(path): 
+        if not os.path.isdir(path):
             continue
         child_dir.append(path)
     return child_dir
 
 
-def load_dict(conf): 
+def load_dict(conf):
     """
     load swda dataset config
     """
     conf_dict = dict()
     fr = io.open(conf, 'r', encoding="utf8")
-    for line in fr: 
+    for line in fr:
         line = line.strip()
         elems = line.split('\t')
-        if elems[0] not in conf_dict: 
+        if elems[0] not in conf_dict:
             conf_dict[elems[0]] = []
         conf_dict[elems[0]].append(elems[1])
     return conf_dict
 
 
-def load_voc(conf): 
+def load_voc(conf):
     """
     load map dict
     """
     map_dict = {}
     fr = io.open(conf, 'r', encoding="utf8")
-    for line in fr:   
+    for line in fr:
         line = line.strip()
         elems = line.split('\t')
         map_dict[elems[0]] = elems[1]
     return map_dict
-
-
-
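
load_dict above groups the second column of a tab-separated config file under the first column, yielding split-name -> file-key lists. A self-contained sketch of the same grouping (the file contents are an assumption about the conf format), using setdefault in place of the explicit membership test:

    import io

    with io.open("sample.conf", "w", encoding="utf8") as fw:  # hypothetical contents
        fw.write(u"train\tsw2005\ntrain\tsw2008\ndev\tsw2010\n")

    conf_dict = {}
    for line in io.open("sample.conf", "r", encoding="utf8"):
        elems = line.strip().split("\t")
        conf_dict.setdefault(elems[0], []).append(elems[1])
    print(conf_dict)  # {'train': ['sw2005', 'sw2008'], 'dev': ['sw2010']}
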
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/dstc2.conf b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/dstc2.conf
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/dstc2.conf
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/dstc2.conf
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/mrda.conf b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/mrda.conf
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/mrda.conf
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/mrda.conf
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/multi-woz.conf b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/multi-woz.conf
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/multi-woz.conf
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/multi-woz.conf
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/swda.conf b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/swda.conf
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/swda.conf
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/swda.conf
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/run_build_data.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/run_build_data.py
similarity index 80%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/run_build_data.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/run_build_data.py
index b1a61a0f9938bd7fb647194f3902ae62d5c6b509..273a39513e7550d5fa507d854a0e6611c7f1b16f 100755
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/run_build_data.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/run_build_data.py
@@ -20,29 +20,29 @@ from build_dstc2_dataset import DSTC2
 from build_mrda_dataset import MRDA
 from build_swda_dataset import SWDA
 
-
-if __name__ == "__main__": 
+if __name__ == "__main__":
     task_name = sys.argv[1]
     task_name = task_name.lower()
-    
-    if task_name not in ['swda', 'mrda', 'atis', 'dstc2', 'udc']: 
+
+    if task_name not in ['swda', 'mrda', 'atis', 'dstc2', 'udc']:
         print("task name error: we support [swda|mrda|atis|dstc2|udc]")
         exit(1)
-    
-    if task_name == 'swda': 
+
+    if task_name == 'swda':
         swda_inst = SWDA()
         swda_inst.main()
-    elif task_name == 'mrda': 
+    elif task_name == 'mrda':
         mrda_inst = MRDA()
         mrda_inst.main()
-    elif task_name == 'atis': 
+    elif task_name == 'atis':
         atis_inst = ATIS()
         atis_inst.main()
-        shutil.copyfile("../../data/input/data/atis/atis_slot/test.txt", "../../data/input/data/atis/atis_slot/dev.txt")
-        shutil.copyfile("../../data/input/data/atis/atis_intent/test.txt", "../../data/input/data/atis/atis_intent/dev.txt")
-    elif task_name == 'dstc2': 
+        shutil.copyfile("../../data/input/data/atis/atis_slot/test.txt",
+                        "../../data/input/data/atis/atis_slot/dev.txt")
+        shutil.copyfile("../../data/input/data/atis/atis_intent/test.txt",
+                        "../../data/input/data/atis/atis_intent/dev.txt")
+    elif task_name == 'dstc2':
         dstc_inst = DSTC2()
         dstc_inst.main()
-    else: 
+    else:
         exit(0)
-
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/tokenization.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/tokenization.py
similarity index 99%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/tokenization.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/tokenization.py
index 8268f8e8ec6f86513344c1523dcd703abeb61c44..175979178a18f9635a4b206d695f2a7547c66786 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/tokenization.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/tokenization.py
@@ -12,7 +12,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
 """Tokenization classes."""
 
 from __future__ import absolute_import
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/transformer_encoder.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/transformer_encoder.py
similarity index 99%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/transformer_encoder.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/transformer_encoder.py
index beb6cccf1143bbbf287dc88f4d121a9bcc7b1cf8..7bdb32c6ab054a46296a9eb14a75bdc0763fbeea 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/transformer_encoder.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/transformer_encoder.py
@@ -113,7 +113,7 @@ def multi_head_attention(queries,
         """
         Scaled Dot-Product Attention
         """
-        scaled_q = layers.scale(x=q, scale=d_key ** -0.5)
+        scaled_q = layers.scale(x=q, scale=d_key**-0.5)
         product = layers.matmul(x=scaled_q, y=k, transpose_y=True)
         if attn_bias:
             product += attn_bias
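
The hunk above only tightens yapf spacing around the exponent; the underlying computation is standard scaled dot-product attention, softmax(Q K^T / sqrt(d_key)) V. A numpy sketch of the same math for a single head (shapes [seq_len, d_key] assumed):

    import numpy as np

    def scaled_dot_product_attention(q, k, v):
        d_key = q.shape[-1]
        product = (q * d_key**-0.5) @ k.T          # scale queries, then Q K^T
        weights = np.exp(product - product.max(-1, keepdims=True))
        weights /= weights.sum(-1, keepdims=True)  # row-wise softmax
        return weights @ v

    q = k = v = np.random.rand(5, 8)
    print(scaled_dot_product_attention(q, k, v).shape)  # (5, 8)
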
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/__init__.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/__init__.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/__init__.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/configure.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/configure.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/configure.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/configure.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/fp16.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/fp16.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/fp16.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/fp16.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/input_field.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/input_field.py
similarity index 94%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/input_field.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/input_field.py
index 28a68854d4de68d0d765f1f12454d8817d9eedbf..c36f74566048a6bfe942922f0baa28f2dd421f3a 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/input_field.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/input_field.py
@@ -25,8 +25,8 @@ import numpy as np
 import paddle.fluid as fluid
 
 
-class InputField(object): 
-    def __init__(self, input_field): 
+class InputField(object):
+    def __init__(self, input_field):
         """init input field"""
         self.src_ids = input_field[0]
         self.pos_ids = input_field[1]
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/model_check.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/model_check.py
similarity index 99%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/model_check.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/model_check.py
index 013130cbb0a9f4d44edc28589bf83672b69abf26..dacf1a668238a22d94fbefb9f52b05620581cad2 100644
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/model_check.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/model_check.py
@@ -30,7 +30,7 @@ def check_cuda(use_cuda, err = \
 
 
 if __name__ == "__main__":
-    
+
     check_cuda(True)
 
     check_cuda(False)
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/py23.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/py23.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/py23.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/py23.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/save_load_io.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/save_load_io.py
similarity index 98%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/save_load_io.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/save_load_io.py
index 7fb9adc6858c86c82d0b9e22dff7a714b642531c..bdcdd811dd1ebb27542e5facc3ba050f00df08f8 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/save_load_io.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/save_load_io.py
@@ -69,8 +69,8 @@ def init_from_checkpoint(args, exe, program):
 def init_from_params(args, exe, program):
 
     assert isinstance(args.init_from_params, str)
-    
-    if not os.path.exists(args.init_from_params): 
+
+    if not os.path.exists(args.init_from_params):
         raise Warning("the params path does not exist.")
 
@@ -113,7 +113,7 @@ def save_param(args, exe, program, dirname):
 
     if not os.path.exists(param_dir):
         os.makedirs(param_dir)
-    
+
     fluid.io.save_params(
         exe,
         os.path.join(param_dir, dirname),
@@ -122,5 +122,3 @@ def save_param(args, exe, program, dirname):
     print("save parameters at %s" % (os.path.join(param_dir, dirname)))
 
     return True
-
-
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu_net.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu_net.py
similarity index 79%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu_net.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu_net.py
index 0634e0e3fed653b20f28774c582c3a47b8103dc9..2b6215df42a3b47517050e4c98d571dc3f8d5f1d 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu_net.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu_net.py
@@ -23,14 +23,9 @@ from dgu.bert import BertModel
 from dgu.utils.configure import JsonConfig
 
 
-def create_net(
-        is_training,
-        model_input,
-        num_labels,
-        paradigm_inst,
-        args): 
+def create_net(is_training, model_input, num_labels, paradigm_inst, args):
     """create dialogue task model"""
-    
+
     src_ids = model_input.src_ids
     pos_ids = model_input.pos_ids
     sent_ids = model_input.sent_ids
@@ -48,14 +43,15 @@ def create_net(
         config=bert_conf,
         use_fp16=False)
 
-    params = {'num_labels': num_labels,
-              'src_ids': src_ids,
-              'pos_ids': pos_ids,
-              'sent_ids': sent_ids,
-              'input_mask': input_mask,
-              'labels': labels,
-              'is_training': is_training}
+    params = {
+        'num_labels': num_labels,
+        'src_ids': src_ids,
+        'pos_ids': pos_ids,
+        'sent_ids': sent_ids,
+        'input_mask': input_mask,
+        'labels': labels,
+        'is_training': is_training
+    }
 
     results = paradigm_inst.paradigm(bert, params)
     return results
-
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/eval.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/eval.py
similarity index 94%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/eval.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/eval.py
index 94d7be51f7dc76638bdc155d8cd516f5decb59f3..73c044750f655d0cdc1e23ccdb1ece44d65c5f3d 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/eval.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/eval.py
@@ -20,17 +20,17 @@ from dgu.evaluation import evaluate
 from dgu.utils.configure import PDConfig
 
 
-def do_eval(args): 
+def do_eval(args):
 
     task_name = args.task_name.lower()
     reference = args.evaluation_file
     predictions = args.output_prediction_file
-    
+
     evaluate(task_name, predictions, reference)
 
 
-if __name__ == "__main__": 
-    
+if __name__ == "__main__":
+
     args = PDConfig(yaml_file="./data/config/dgu.yaml")
     args.build()
 
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/images/dgu.png b/PaddleNLP/dialogue_system/dialogue_general_understanding/images/dgu.png
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/images/dgu.png
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/images/dgu.png
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/inference_model.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/inference_model.py
similarity index 67%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/inference_model.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/inference_model.py
index 01e6d96132998b36a3a3db55289cbe9e202e831c..438ebcc730f57ea27a5821b3aa873c4a0a413dba 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/inference_model.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/inference_model.py
@@ -29,10 +29,10 @@ import dgu.utils.save_load_io as save_load_io
 
 import dgu.reader as reader
 from dgu_net import create_net
-import dgu.define_paradigm as define_paradigm 
+import dgu.define_paradigm as define_paradigm
 
 
-def do_save_inference_model(args): 
+def do_save_inference_model(args):
     """save inference model function"""
 
     task_name = args.task_name.lower()
@@ -57,35 +57,36 @@ def do_save_inference_model(args):
         with fluid.unique_name.guard():
 
             # define inputs of the network
-            num_labels = len(processors[task_name].get_labels()) 
+            num_labels = len(processors[task_name].get_labels())
 
             src_ids = fluid.data(
-                        name='src_ids', shape=[-1, args.max_seq_len], dtype='int64')
+                name='src_ids', shape=[-1, args.max_seq_len], dtype='int64')
             pos_ids = fluid.data(
-                        name='pos_ids', shape=[-1, args.max_seq_len], dtype='int64')
+                name='pos_ids', shape=[-1, args.max_seq_len], dtype='int64')
             sent_ids = fluid.data(
-                        name='sent_ids', shape=[-1, args.max_seq_len], dtype='int64')
+                name='sent_ids', shape=[-1, args.max_seq_len], dtype='int64')
             input_mask = fluid.data(
-                        name='input_mask', shape=[-1, args.max_seq_len], dtype='float32')
-            if args.task_name == 'atis_slot': 
+                name='input_mask',
+                shape=[-1, args.max_seq_len],
+                dtype='float32')
+            if args.task_name == 'atis_slot':
                 labels = fluid.data(
-                        name='labels', shape=[-1, args.max_seq_len], dtype='int64')
+                    name='labels', shape=[-1, args.max_seq_len], dtype='int64')
             elif args.task_name in ['dstc2', 'dstc2_asr', 'multi-woz']:
                 labels = fluid.data(
-                        name='labels', shape=[-1, num_labels], dtype='int64')
-            else: 
-                labels = fluid.data(
-                        name='labels', shape=[-1, 1], dtype='int64')
-            
+                    name='labels', shape=[-1, num_labels], dtype='int64')
+            else:
+                labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64')
+
             input_inst = [src_ids, pos_ids, sent_ids, input_mask, labels]
             input_field = InputField(input_inst)
-            
+
             results = create_net(
-                    is_training=False, 
-                    model_input=input_field, 
-                    num_labels=num_labels,
-                    paradigm_inst=paradigm_inst,
-                    args=args)
+                is_training=False,
+                model_input=input_field,
+                num_labels=num_labels,
+                paradigm_inst=paradigm_inst,
+                args=args)
             probs = results.get("probs", None)
 
     if args.use_cuda:
@@ -97,7 +98,7 @@ def do_save_inference_model(args):
     exe.run(startup_prog)
 
     assert (args.init_from_params) or (args.init_from_pretrain_model)
-    
+
     if args.init_from_params:
         save_load_io.init_from_params(args, exe, test_prog)
     elif args.init_from_pretrain_model:
@@ -105,20 +106,16 @@ def do_save_inference_model(args):
 
     # saving inference model
     fluid.io.save_inference_model(
-            args.inference_model_dir,
-            feeded_var_names=[
-                input_field.src_ids.name, 
-                input_field.pos_ids.name,
-                input_field.sent_ids.name, 
-                input_field.input_mask.name
-            ],
-            target_vars=[
-                probs
-            ],
-            executor=exe,
-            main_program=test_prog,
-            model_filename="model.pdmodel",
-            params_filename="params.pdparams")
+        args.inference_model_dir,
+        feeded_var_names=[
+            input_field.src_ids.name, input_field.pos_ids.name,
+            input_field.sent_ids.name, input_field.input_mask.name
+        ],
+        target_vars=[probs],
+        executor=exe,
+        main_program=test_prog,
+        model_filename="model.pdmodel",
+        params_filename="params.pdparams")
 
     print("save inference model at %s" % (args.inference_model_dir))
 
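
A model saved this way can be restored with the matching Fluid call; a sketch assuming the directory and file names mirror the save_inference_model arguments above:

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    infer_prog, feed_names, fetch_targets = fluid.io.load_inference_model(
        "inference_model_dir",            # hypothetical args.inference_model_dir
        exe,
        model_filename="model.pdmodel",   # must match the names used when saving
        params_filename="params.pdparams")
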
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/main.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/main.py
similarity index 99%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/main.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/main.py
index 0b7b43f3c7daaf350319ee2bca0ba1de1d2e17de..bf1cf3b26646eb03b74f857ec2323a4d4ea33f0b 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/main.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/main.py
@@ -26,7 +26,6 @@ from inference_model import do_save_inference_model
 
 from dgu.utils.configure import PDConfig
 
-
 if __name__ == "__main__":
 
     args = PDConfig(yaml_file="./data/config/dgu.yaml")
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/predict.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/predict.py
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/predict.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/predict.py
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/run.sh b/PaddleNLP/dialogue_system/dialogue_general_understanding/run.sh
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/run.sh
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/run.sh
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/train.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/train.py
similarity index 74%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/train.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/train.py
index 7401cee810cdf25408780d7cc6bfa059a0df5e4a..2ea2a0395a26400ac29d1adadf47f7cea2d9ec32 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/train.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/train.py
@@ -28,7 +28,7 @@ import paddle.fluid as fluid
 from dgu_net import create_net
 import dgu.reader as reader
 from dgu.optimization import optimization
-import dgu.define_paradigm as define_paradigm 
+import dgu.define_paradigm as define_paradigm
 from dgu.utils.configure import PDConfig
 from dgu.utils.input_field import InputField
 from dgu.utils.model_check import check_cuda
@@ -37,7 +37,7 @@ import dgu.utils.save_load_io as save_load_io
 
 def do_train(args):
     """train function"""
-    
+
     task_name = args.task_name.lower()
     paradigm_inst = define_paradigm.Paradigm(task_name)
 
@@ -53,34 +53,35 @@ def do_train(args):
     train_prog = fluid.default_main_program()
     startup_prog = fluid.default_startup_program()
 
-    with fluid.program_guard(train_prog, startup_prog): 
+    with fluid.program_guard(train_prog, startup_prog):
         train_prog.random_seed = args.random_seed
         startup_prog.random_seed = args.random_seed
-        with fluid.unique_name.guard(): 
+        with fluid.unique_name.guard():
             num_labels = len(processors[task_name].get_labels())
 
             src_ids = fluid.data(
-                        name='src_ids', shape=[-1, args.max_seq_len], dtype='int64')
+                name='src_ids', shape=[-1, args.max_seq_len], dtype='int64')
             pos_ids = fluid.data(
-                        name='pos_ids', shape=[-1, args.max_seq_len], dtype='int64')
+                name='pos_ids', shape=[-1, args.max_seq_len], dtype='int64')
             sent_ids = fluid.data(
-                        name='sent_ids', shape=[-1, args.max_seq_len], dtype='int64')
+                name='sent_ids', shape=[-1, args.max_seq_len], dtype='int64')
             input_mask = fluid.data(
-                        name='input_mask', shape=[-1, args.max_seq_len], dtype='float32')
-            if args.task_name == 'atis_slot': 
+                name='input_mask',
+                shape=[-1, args.max_seq_len],
+                dtype='float32')
+            if args.task_name == 'atis_slot':
                 labels = fluid.data(
-                        name='labels', shape=[-1, args.max_seq_len], dtype='int64')
+                    name='labels', shape=[-1, args.max_seq_len], dtype='int64')
             elif args.task_name in ['dstc2']:
                 labels = fluid.data(
-                        name='labels', shape=[-1, num_labels], dtype='int64')
-            else: 
-                labels = fluid.data(
-                        name='labels', shape=[-1, 1], dtype='int64')
-            
+                    name='labels', shape=[-1, num_labels], dtype='int64')
+            else:
+                labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64')
+
             input_inst = [src_ids, pos_ids, sent_ids, input_mask, labels]
             input_field = InputField(input_inst)
-            data_reader = fluid.io.PyReader(feed_list=input_inst, 
-                        capacity=4, iterable=False)
+            data_reader = fluid.io.PyReader(
+                feed_list=input_inst, capacity=4, iterable=False)
             processor = processors[task_name](data_dir=args.data_dir,
                                               vocab_path=args.vocab_path,
                                               max_seq_len=args.max_seq_len,
@@ -90,12 +91,12 @@ def do_train(args):
                                               random_seed=args.random_seed)
 
             results = create_net(
-                    is_training=True, 
-                    model_input=input_field, 
-                    num_labels=num_labels,
-                    paradigm_inst=paradigm_inst,
-                    args=args)
-            
+                is_training=True,
+                model_input=input_field,
+                num_labels=num_labels,
+                paradigm_inst=paradigm_inst,
+                args=args)
+
             loss = results.get("loss", None)
             probs = results.get("probs", None)
             accuracy = results.get("accuracy", None)
@@ -103,21 +104,19 @@ def do_train(args):
 
             loss.persistable = True
             probs.persistable = True
-            if accuracy: 
+            if accuracy:
                 accuracy.persistable = True
             num_seqs.persistable = True
 
-            if args.use_cuda: 
+            if args.use_cuda:
                 dev_count = fluid.core.get_cuda_device_count()
-            else: 
+            else:
                 dev_count = int(os.environ.get('CPU_NUM', 1))
-            
+
             batch_generator = processor.data_generator(
-                batch_size=args.batch_size,
-                phase='train',
-                shuffle=True)
+                batch_size=args.batch_size, phase='train', shuffle=True)
             num_train_examples = processor.get_num_examples(phase='train')
-            
+
             if args.in_tokens:
                 max_train_steps = args.epoch * num_train_examples // (
                     args.batch_size // args.max_seq_len) // dev_count
@@ -147,32 +146,32 @@ def do_train(args):
         place = fluid.CUDAPlace(int(os.getenv('FLAGS_selected_gpus', '0')))
     else:
         place = fluid.CPUPlace()
-    
+
     exe = fluid.Executor(place)
     exe.run(startup_prog)
 
     assert (args.init_from_checkpoint == "") or (
-            args.init_from_pretrain_model == "")
+        args.init_from_pretrain_model == "")
 
     # init from some checkpoint, to resume the previous training
-    if args.init_from_checkpoint: 
+    if args.init_from_checkpoint:
         save_load_io.init_from_checkpoint(args, exe, train_prog)
-    
+
     # init from some pretrain models, to better solve the current task
-    if args.init_from_pretrain_model: 
+    if args.init_from_pretrain_model:
         save_load_io.init_from_pretrain_model(args, exe, train_prog)
 
     build_strategy = fluid.compiler.BuildStrategy()
     build_strategy.enable_inplace = True
 
     compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel(
-                loss_name=loss.name, build_strategy=build_strategy)
-    
+        loss_name=loss.name, build_strategy=build_strategy)
+
     # start training
     steps = 0
     time_begin = time.time()
     ce_info = []
-    for epoch_step in range(args.epoch): 
+    for epoch_step in range(args.epoch):
         data_reader.start()
         while True:
             try:
@@ -216,43 +215,38 @@ def do_train(args):
                     used_time = time_end - time_begin
                     current_time = time.strftime('%Y-%m-%d %H:%M:%S',
                                                  time.localtime(time.time()))
-                    if accuracy is not None: 
-                        print(
-                            "%s epoch: %d, step: %d, ave loss: %f, "
-                            "ave acc: %f, speed: %f steps/s" %
-                            (current_time, epoch_step, steps,
-                             np.mean(np_loss),
-                             np.mean(np_acc),
-                             args.print_steps / used_time))
+                    if accuracy is not None:
+                        print("%s epoch: %d, step: %d, ave loss: %f, "
+                              "ave acc: %f, speed: %f steps/s" %
+                              (current_time, epoch_step, steps,
+                               np.mean(np_loss), np.mean(np_acc),
+                               args.print_steps / used_time))
                         ce_info.append([
-                            np.mean(np_loss),
-                            np.mean(np_acc),
+                            np.mean(np_loss), np.mean(np_acc),
                             args.print_steps / used_time
                         ])
                     else:
-                        print(
-                            "%s epoch: %d, step: %d, ave loss: %f, "
-                            "speed: %f steps/s" %
-                            (current_time, epoch_step, steps,
-                             np.mean(np_loss),
-                             args.print_steps / used_time))
-                        ce_info.append([
-                            np.mean(np_loss),
-                            args.print_steps / used_time
-                        ])
+                        print("%s epoch: %d, step: %d, ave loss: %f, "
+                              "speed: %f steps/s" %
+                              (current_time, epoch_step, steps,
+                               np.mean(np_loss), args.print_steps / used_time))
+                        ce_info.append(
+                            [np.mean(np_loss), args.print_steps / used_time])
                     time_begin = time.time()
 
-                if steps % args.save_steps == 0: 
+                if steps % args.save_steps == 0:
                     save_path = "step_" + str(steps)
-                    if args.save_checkpoint: 
-                        save_load_io.save_checkpoint(args, exe, train_prog, save_path)
+                    if args.save_checkpoint:
+                        save_load_io.save_checkpoint(args, exe, train_prog,
+                                                     save_path)
                     if args.save_param:
-                        save_load_io.save_param(args, exe, train_prog, save_path)
-                 
-            except fluid.core.EOFException:  
+                        save_load_io.save_param(args, exe, train_prog,
+                                                save_path)
+
+            except fluid.core.EOFException:
                 data_reader.reset()
                 break
-    if args.save_checkpoint: 
+    if args.save_checkpoint:
         save_load_io.save_checkpoint(args, exe, train_prog, "step_final")
     if args.save_param:
         save_load_io.save_param(args, exe, train_prog, "step_final")
@@ -264,7 +258,7 @@ def do_train(args):
         if cards != '':
             num = len(cards.split(","))
         return num
-    
+
     if args.enable_ce:
         card_num = get_cards()
         print("test_card_num", card_num)
@@ -283,8 +277,8 @@ def do_train(args):
         print("kpis\ttrain_acc_%s_card%s\t%f" % (task_name, card_num, ce_acc))
 
 
-if __name__ == '__main__': 
-    
+if __name__ == '__main__':
+
     args = PDConfig(yaml_file="./data/config/dgu.yaml")
     args.build()
     args.Print()
diff --git a/PaddleNLP/emotion_detection/inference_model.py b/PaddleNLP/emotion_detection/inference_model.py
index 64f4815c35b6e9a2e4fb78be6cf0587a2464f513..0cc4b503c641e9cae7823fc2456c7534f3a21852 100644
--- a/PaddleNLP/emotion_detection/inference_model.py
+++ b/PaddleNLP/emotion_detection/inference_model.py
@@ -19,8 +19,7 @@ from __future__ import print_function
 
 import os
 import sys
-sys.path.append("../")
-
+sys.path.append("../shared_modules/")
 import paddle
 import paddle.fluid as fluid
 import numpy as np
diff --git a/PaddleNLP/emotion_detection/run_classifier.py b/PaddleNLP/emotion_detection/run_classifier.py
index 9e3e9a824dbd458169f4391522d2c4acb046104c..a5dd0ebb8dd711d38d81e55ce31221f2e967fb29 100644
--- a/PaddleNLP/emotion_detection/run_classifier.py
+++ b/PaddleNLP/emotion_detection/run_classifier.py
@@ -11,7 +11,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
 """
 Emotion Detection Task
 """
@@ -24,7 +23,7 @@ import os
 import time
 import multiprocessing
 import sys
-sys.path.append("../")
+sys.path.append("../shared_modules/")
 
 import paddle
 import paddle.fluid as fluid
@@ -38,9 +37,7 @@ import reader
 import utils
 
 
-def create_model(args,
-                 num_labels,
-                 is_prediction=False):
+def create_model(args, num_labels, is_prediction=False):
     """
     Create Model for Emotion Detection
     """
@@ -77,10 +74,17 @@ def create_model(args,
         raise ValueError("Unknown network type!")
 
     if is_prediction:
-        probs = network(data, seq_len, None, args.vocab_size, class_dim=num_labels, is_prediction=True)
+        probs = network(
+            data,
+            seq_len,
+            None,
+            args.vocab_size,
+            class_dim=num_labels,
+            is_prediction=True)
         return loader, probs, [data.name, seq_len.name]
 
-    avg_loss, probs = network(data, seq_len, label, args.vocab_size, class_dim=num_labels)
+    avg_loss, probs = network(
+        data, seq_len, label, args.vocab_size, class_dim=num_labels)
     num_seqs = fluid.layers.create_tensor(dtype='int64')
     accuracy = fluid.layers.accuracy(input=probs, label=label, total=num_seqs)
     return loader, avg_loss, accuracy, num_seqs
@@ -142,9 +146,10 @@ def main(args):
     exe = fluid.Executor(place)
 
     task_name = args.task_name.lower()
-    processor = reader.EmoTectProcessor(data_dir=args.data_dir,
-                                      vocab_path=args.vocab_path,
-                                      random_seed=args.random_seed)
+    processor = reader.EmoTectProcessor(
+        data_dir=args.data_dir,
+        vocab_path=args.vocab_path,
+        random_seed=args.random_seed)
     #num_labels = len(processor.get_labels())
     num_labels = args.num_labels
 
@@ -173,9 +178,7 @@ def main(args):
         with fluid.program_guard(train_program, startup_prog):
             with fluid.unique_name.guard():
                 train_loader, loss, accuracy, num_seqs = create_model(
-                    args,
-                    num_labels=num_labels,
-                    is_prediction=False)
+                    args, num_labels=num_labels, is_prediction=False)
 
                 sgd_optimizer = fluid.optimizer.Adagrad(learning_rate=args.lr)
                 sgd_optimizer.minimize(loss)
@@ -189,37 +192,27 @@ def main(args):
     if args.do_val:
         if args.do_train:
             test_data_generator = processor.data_generator(
-                batch_size=args.batch_size,
-                phase='dev',
-                epoch=1)
+                batch_size=args.batch_size, phase='dev', epoch=1)
         else:
             test_data_generator = processor.data_generator(
-                batch_size=args.batch_size,
-                phase='test',
-                epoch=1)
+                batch_size=args.batch_size, phase='test', epoch=1)
 
         test_prog = fluid.Program()
         with fluid.program_guard(test_prog, startup_prog):
             with fluid.unique_name.guard():
                 test_loader, loss, accuracy, num_seqs = create_model(
-                    args,
-                    num_labels=num_labels,
-                    is_prediction=False)
+                    args, num_labels=num_labels, is_prediction=False)
         test_prog = test_prog.clone(for_test=True)
 
     if args.do_infer:
         infer_data_generator = processor.data_generator(
-            batch_size=args.batch_size,
-            phase='infer',
-            epoch=1)
+            batch_size=args.batch_size, phase='infer', epoch=1)
 
         test_prog = fluid.Program()
         with fluid.program_guard(test_prog, startup_prog):
             with fluid.unique_name.guard():
                 infer_loader, probs, _ = create_model(
-                    args,
-                    num_labels=num_labels,
-                    is_prediction=True)
+                    args, num_labels=num_labels, is_prediction=True)
         test_prog = test_prog.clone(for_test=True)
 
     exe.run(startup_prog)
@@ -292,8 +285,9 @@ def main(args):
                     time_begin = time.time()
 
                 if steps % args.save_steps == 0:
-                    save_path = os.path.join(args.save_checkpoint_dir, "step_" + str(steps))
-                    fluid.io.save_persistables(exe, save_path, train_program)
+                    save_path = os.path.join(args.save_checkpoint_dir,
+                                             "step_" + str(steps))
+                    fluid.save(train_program, save_path)
 
                 if steps % args.validation_steps == 0:
                     # evaluate on dev set
@@ -306,11 +300,11 @@ def main(args):
                 print("final step: %d " % steps)
                 if args.do_val:
                     evaluate(test_exe, test_prog, test_loader,
-                        [loss.name, accuracy.name, num_seqs.name],
-                        "dev")
+                             [loss.name, accuracy.name, num_seqs.name], "dev")
 
-                save_path = os.path.join(args.save_checkpoint_dir, "step_" + str(steps))
-                fluid.io.save_persistables(exe, save_path, train_program)
+                save_path = os.path.join(args.save_checkpoint_dir,
+                                         "step_" + str(steps))
+                fluid.save(train_program, save_path)
                 train_loader.reset()
                 break
 
@@ -334,15 +328,12 @@ def main(args):
     if not args.do_train and args.do_val:
         print("Final test result:")
         evaluate(test_exe, test_prog, test_loader,
-                 [loss.name, accuracy.name, num_seqs.name],
-                 "test")
+                 [loss.name, accuracy.name, num_seqs.name], "test")
 
     # infer
     if args.do_infer:
         print("Final infer result:")
-        infer(test_exe, test_prog, infer_loader,
-             [probs.name],
-             "infer")
+        infer(test_exe, test_prog, infer_loader, [probs.name], "infer")
 
 
 def get_cards():
diff --git a/PaddleNLP/emotion_detection/run_ernie_classifier.py b/PaddleNLP/emotion_detection/run_ernie_classifier.py
index 2acdaa532fec2f51066975427d283256cb20b1c2..0f929292452c56404199e77962dec0d77f64f12d 100644
--- a/PaddleNLP/emotion_detection/run_ernie_classifier.py
+++ b/PaddleNLP/emotion_detection/run_ernie_classifier.py
@@ -11,7 +11,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
 """
 Emotion Detection Task, based on ERNIE
 """
@@ -25,7 +24,7 @@ import time
 import argparse
 import multiprocessing
 import sys
-sys.path.append("../")
+sys.path.append("../shared_modules/")
 
 import paddle
 import paddle.fluid as fluid
@@ -350,7 +349,7 @@ def main(args):
 
                 if steps % args.save_steps == 0:
                     save_path = os.path.join(args.save_checkpoint_dir, "step_" + str(steps))
-                    fluid.io.save_persistables(exe, save_path, train_program)
+                    fluid.save(train_program, save_path)
 
                 if steps % args.validation_steps == 0:
                     # evaluate dev set
@@ -369,7 +368,7 @@ def main(args):
 
             except fluid.core.EOFException:
                 save_path = os.path.join(args.save_checkpoint_dir, "step_" + str(steps))
-                fluid.io.save_persistables(exe, save_path, train_program)
+                fluid.save(train_program, save_path)
                 train_pyreader.reset()
                 break
 
diff --git a/PaddleNLP/emotion_detection/utils.py b/PaddleNLP/emotion_detection/utils.py
index 2477f98a7413445f4e1b67153ffe8c1e8ebe362c..de7ac1eeb8f5212d8383fd6bcb8b42d8bd4957e2 100644
--- a/PaddleNLP/emotion_detection/utils.py
+++ b/PaddleNLP/emotion_detection/utils.py
@@ -11,7 +11,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
 """
 EmoTect utilities.
 """
@@ -29,27 +28,13 @@ import paddle
 import paddle.fluid as fluid
 import numpy as np
 
+
 def init_checkpoint(exe, init_checkpoint_path, main_program):
     """
     Init CheckPoint
     """
-    assert os.path.exists(
-        init_checkpoint_path), "[%s] cann't be found." % init_checkpoint_path
-
-    def existed_persitables(var):
-        """
-        If existed presitabels
-        """
-        if not fluid.io.is_persistable(var):
-            return False
-        return os.path.exists(os.path.join(init_checkpoint_path, var.name))
 
-    fluid.io.load_vars(
-        exe,
-        init_checkpoint_path,
-        main_program=main_program,
-        predicate=existed_persitables)
-    print("Load model from {}".format(init_checkpoint_path))
+    fluid.load(main_program, init_checkpoint_path, exe)
 
 
 def word2id(word_dict, query):
@@ -57,8 +42,10 @@ def word2id(word_dict, query):
     Convert word sequence into id list
     """
     unk_id = len(word_dict)
-    wids = [word_dict[w] if w in word_dict else unk_id
-            for w in query.strip().split(" ")]
+    wids = [
+        word_dict[w] if w in word_dict else unk_id
+        for w in query.strip().split(" ")
+    ]
     return wids
 
 
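The changes above are the heart of this migration: variable-level checkpointing through `fluid.io.save_persistables` and a `fluid.io.load_vars` predicate is replaced by the program-level `fluid.save`/`fluid.load` pair. A minimal round trip under the new API, using an illustrative throwaway program and prefix rather than this repo's real models:

```python
import paddle.fluid as fluid

# Build a small program so the sketch is self-contained.
main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.data(name="x", shape=[None, 8], dtype="float32")
    y = fluid.layers.fc(input=x, size=2)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_prog)

# fluid.save writes "<prefix>.pdparams" (parameters) and "<prefix>.pdopt"
# (optimizer state, when an optimizer is present) for the whole program.
fluid.save(main_prog, "step_100")

# fluid.load restores from the same prefix in one call; no per-variable
# existence predicate is needed anymore.
fluid.load(main_prog, "step_100", exe)
```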
diff --git a/PaddleNLP/language_model/README.md b/PaddleNLP/language_model/README.md
index 10b882a7aaa94f6612c01566041b9d228dd86a51..8bf919a341c7b95eb58827c46c2111439164fcf1 100644
--- a/PaddleNLP/language_model/README.md
+++ b/PaddleNLP/language_model/README.md
@@ -5,7 +5,7 @@
 ## 1. Task Description
 This document describes the implementation of an LSTM-based language model: given an input word sequence (Chinese word segmentation, English tokenization), it computes the perplexity (ppl, a measure of sentence fluency). For an introduction to RNN-based language models, see [this paper](https://arxiv.org/abs/1409.2329). Compared with traditional methods, RNN-based approaches handle rare words better.
 
-**The language model currently requires PaddlePaddle 1.6 or later, or an appropriate develop version.**
+**The language model currently requires PaddlePaddle 1.7 or later, or an appropriate develop version.**
 
 Users are also encouraged to refer to the [IPython Notebook demo](https://aistudio.baidu.com/aistudio/projectDetail/122290)
 
diff --git a/PaddleNLP/language_model/train.py b/PaddleNLP/language_model/train.py
index f00604f3aa740eaa7c740ca82282ec0eae180916..9a5af4b7be9869d0e2a09d9959f82947279900b1 100644
--- a/PaddleNLP/language_model/train.py
+++ b/PaddleNLP/language_model/train.py
@@ -36,7 +36,7 @@ import sys
 if sys.version[0] == '2':
     reload(sys)
     sys.setdefaultencoding("utf-8")
-sys.path.append('../')
+sys.path.append('../shared_modules/')
 import os
 os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
 
@@ -60,7 +60,7 @@ def profile_context(profile=True, profiler_path='/tmp/paddingrnn.profile'):
 
 
 def get_current_model_para(train_prog, train_exe):
-    param_list = train_prog.block(0).all_parameters()
+    param_list = train_prog.all_parameters()
     param_name_list = [p.name for p in param_list]
 
     vals = {}
@@ -73,7 +73,7 @@ def get_current_model_para(train_prog, train_exe):
 
 def save_para_npz(train_prog, train_exe):
     print("begin to save model to model_base")
-    param_list = train_prog.block(0).all_parameters()
+    param_list = train_prog.all_parameters()
     param_name_list = [p.name for p in param_list]
 
     vals = {}
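`Program.all_parameters()` replaces the manual walk over `block(0)`: it returns the `Parameter` variables of the whole program instead of only the global block. A hedged sketch of the surrounding pattern in `get_current_model_para` and `save_para_npz`, reading current values out of the executor scope (names illustrative):

```python
import numpy as np
import paddle.fluid as fluid

def dump_parameters(program):
    """Map each parameter name to its current value as a numpy array."""
    vals = {}
    for p in program.all_parameters():  # parameters from every block
        tensor = fluid.global_scope().find_var(p.name).get_tensor()
        vals[p.name] = np.array(tensor)
    return vals
```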
diff --git a/PaddleNLP/lexical_analysis/README.md b/PaddleNLP/lexical_analysis/README.md
index e1ed54d1267a850817cf96bdba54980b405fb11c..97ab9fcd1a244b5fb49c57ccf4a7f5232a9481b9 100644
--- a/PaddleNLP/lexical_analysis/README.md
+++ b/PaddleNLP/lexical_analysis/README.md
@@ -16,7 +16,7 @@ Lexical Analysis of Chinese (LAC) is a joint lexical analysis model
 
 #### 1. PaddlePaddle Installation
 
-This project requires PaddlePaddle 1.6.0 or later and PaddleHub 1.0.0 or later. To install PaddlePaddle, see the official [quick install](http://www.paddlepaddle.org/paddle#quick-start) guide; to install PaddleHub, see [PaddleHub](https://github.com/PaddlePaddle/PaddleHub).
+This project requires PaddlePaddle 1.7 or later and PaddleHub 1.0.0 or later. To install PaddlePaddle, see the official [quick install](http://www.paddlepaddle.org/paddle#quick-start) guide; to install PaddleHub, see [PaddleHub](https://github.com/PaddlePaddle/PaddleHub).
 
 > Warning: the GPU and CPU builds of PaddlePaddle are paddlepaddle-gpu and paddlepaddle respectively; take care to install the correct one.
 
diff --git a/PaddleNLP/lexical_analysis/creator.py b/PaddleNLP/lexical_analysis/creator.py
index e4e1fc9406a7f1beb50aff10848c90a29fbb55fc..bc8e492f1b66e4474b626aa3dbfa767889f73e1d 100644
--- a/PaddleNLP/lexical_analysis/creator.py
+++ b/PaddleNLP/lexical_analysis/creator.py
@@ -26,7 +26,7 @@ from paddle.fluid.initializer import NormalInitializer
 from reader import Dataset
 from ernie_reader import SequenceLabelReader
 
-sys.path.append("..")
+sys.path.append("../shared_modules/")
 from models.sequence_labeling import nets
 from models.representation.ernie import ernie_encoder, ernie_pyreader
 
@@ -35,9 +35,10 @@ def create_model(args, vocab_size, num_labels, mode='train'):
     """create lac model"""
 
     # model's input data
-    words = fluid.data(name='words', shape=[-1, 1], dtype='int64', lod_level=1)
+    words = fluid.data(
+        name='words', shape=[None, 1], dtype='int64', lod_level=1)
     targets = fluid.data(
-        name='targets', shape=[-1, 1], dtype='int64', lod_level=1)
+        name='targets', shape=[None, 1], dtype='int64', lod_level=1)
 
     # for inference process
     if mode == 'infer':
@@ -88,9 +89,11 @@ def create_pyreader(args,
                     return_reader=False,
                     mode='train'):
     # init reader
+    device_count = len(fluid.cuda_places()) if args.use_cuda else len(
+        fluid.cpu_places())
 
     if model == 'lac':
-        pyreader = fluid.io.PyReader(
+        pyreader = fluid.io.DataLoader.from_generator(
             feed_list=feed_list,
             capacity=50,
             use_double_buffer=True,
@@ -101,19 +104,19 @@ def create_pyreader(args,
 
         # create lac pyreader
         if mode == 'train':
-            pyreader.decorate_sample_list_generator(
+            pyreader.set_sample_list_generator(
                 fluid.io.batch(
                     fluid.io.shuffle(
                         reader.file_reader(file_name),
                         buf_size=args.traindata_shuffle_buffer),
-                    batch_size=args.batch_size),
+                    batch_size=args.batch_size // device_count),
                 places=place)
         else:
-            pyreader.decorate_sample_list_generator(
+            pyreader.set_sample_list_generator(
                 fluid.io.batch(
                     reader.file_reader(
                         file_name, mode=mode),
-                    batch_size=args.batch_size),
+                    batch_size=args.batch_size // device_count),
                 places=place)
 
     elif model == 'ernie':
@@ -162,19 +165,19 @@ def create_ernie_model(args, ernie_config):
     # ERNIE's input data
 
     src_ids = fluid.data(
-        name='src_ids', shape=[-1, args.max_seq_len, 1], dtype='int64')
+        name='src_ids', shape=[None, args.max_seq_len, 1], dtype='int64')
     sent_ids = fluid.data(
-        name='sent_ids', shape=[-1, args.max_seq_len, 1], dtype='int64')
+        name='sent_ids', shape=[None, args.max_seq_len, 1], dtype='int64')
     pos_ids = fluid.data(
-        name='pos_ids', shape=[-1, args.max_seq_len, 1], dtype='int64')
+        name='pos_ids', shape=[None, args.max_seq_len, 1], dtype='int64')
     input_mask = fluid.data(
-        name='input_mask', shape=[-1, args.max_seq_len, 1], dtype='float32')
+        name='input_mask', shape=[None, args.max_seq_len, 1], dtype='float32')
 
     padded_labels = fluid.data(
-        name='padded_labels', shape=[-1, args.max_seq_len, 1], dtype='int64')
+        name='padded_labels', shape=[None, args.max_seq_len, 1], dtype='int64')
 
     seq_lens = fluid.data(
-        name='seq_lens', shape=[-1], dtype='int64', lod_level=0)
+        name='seq_lens', shape=[None], dtype='int64', lod_level=0)
 
     squeeze_labels = fluid.layers.squeeze(padded_labels, axes=[-1])
 
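Two migrations meet in `creator.py`: `fluid.data` now marks variable dimensions with `None` instead of `-1`, and the deprecated `fluid.io.PyReader` gives way to `fluid.io.DataLoader.from_generator`, whose `set_sample_list_generator` supersedes `decorate_sample_list_generator`. Since a `DataLoader` bound to several places consumes one batch per device per step, the batch size is divided by the device count. A self-contained sketch of the combined pattern, with a toy generator standing in for `reader.file_reader`:

```python
import paddle.fluid as fluid

# `None` marks the variable batch dimension; lod_level=1 preserves the
# variable-length sequence structure of each sample.
words = fluid.data(name="words", shape=[None, 1], dtype="int64", lod_level=1)

loader = fluid.io.DataLoader.from_generator(
    feed_list=[words], capacity=50, use_double_buffer=True, iterable=True)

def sample_generator():
    # Illustrative stand-in for reader.file_reader: each sample is a list
    # with one field, the word-id sequence.
    for i in range(32):
        yield [[i % 7, (i + 1) % 7, (i + 2) % 7]]

loader.set_sample_list_generator(
    fluid.io.batch(sample_generator, batch_size=8),
    places=fluid.cpu_places(1))

for feed in loader():
    pass  # pass `feed` to Executor.run(feed=...) in a real training loop
```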
diff --git a/PaddleNLP/lexical_analysis/ernie_reader.py b/PaddleNLP/lexical_analysis/ernie_reader.py
index 52a2dd533c7467043b718a630f951ca33a6e9402..53e74f72285130ad2fbd82fd3ee286c9c799a149 100644
--- a/PaddleNLP/lexical_analysis/ernie_reader.py
+++ b/PaddleNLP/lexical_analysis/ernie_reader.py
@@ -20,7 +20,7 @@ import sys
 from collections import namedtuple
 import numpy as np
 
-sys.path.append("..")
+sys.path.append("../shared_modules/")
 from preprocess.ernie.task_reader import BaseReader, tokenization
 
 
diff --git a/PaddleNLP/lexical_analysis/eval.py b/PaddleNLP/lexical_analysis/eval.py
index 3b96d0c7e1258313709d69bd1e1ba02b25ccb33f..c3e28b1314ba3f55555e2d97ea2b5f488dd6d422 100644
--- a/PaddleNLP/lexical_analysis/eval.py
+++ b/PaddleNLP/lexical_analysis/eval.py
@@ -24,7 +24,7 @@ import paddle
 import utils
 import reader
 import creator
-sys.path.append('../models/')
+sys.path.append('../shared_modules/models/')
 from model_check import check_cuda
 from model_check import check_version
 
diff --git a/PaddleNLP/lexical_analysis/inference_model.py b/PaddleNLP/lexical_analysis/inference_model.py
index 89075723fe9477d0c153c82b1d87d7377c3f7258..29d078adcbaae80f43f761de83f83df481c5b019 100644
--- a/PaddleNLP/lexical_analysis/inference_model.py
+++ b/PaddleNLP/lexical_analysis/inference_model.py
@@ -10,7 +10,7 @@ import paddle.fluid as fluid
 import creator
 import reader
 import utils
-sys.path.append('../models/')
+sys.path.append('../shared_modules/models/')
 from model_check import check_cuda
 from model_check import check_version
 
diff --git a/PaddleNLP/lexical_analysis/predict.py b/PaddleNLP/lexical_analysis/predict.py
index d3ed22ac7cb9f4f45d1cb1350f13031c62b54d4c..3b2d2597d0cec875a854362277a47ba13651f874 100644
--- a/PaddleNLP/lexical_analysis/predict.py
+++ b/PaddleNLP/lexical_analysis/predict.py
@@ -24,7 +24,7 @@ import paddle
 import utils
 import reader
 import creator
-sys.path.append('../models/')
+sys.path.append('../shared_modules/models/')
 from model_check import check_cuda
 from model_check import check_version
 
diff --git a/PaddleNLP/lexical_analysis/run_ernie_sequence_labeling.py b/PaddleNLP/lexical_analysis/run_ernie_sequence_labeling.py
index 3ebed4c3a358039e5d9c63ae4ac41231fac8dbbb..d716a8132784cf42cb50ce53f980a4c57c0ec785 100644
--- a/PaddleNLP/lexical_analysis/run_ernie_sequence_labeling.py
+++ b/PaddleNLP/lexical_analysis/run_ernie_sequence_labeling.py
@@ -34,7 +34,7 @@ import paddle.fluid as fluid
 
 import creator
 import utils
-sys.path.append("..")
+sys.path.append("../shared_modules/")
 from models.representation.ernie import ErnieConfig
 from models.model_check import check_cuda
 from models.model_check import check_version
@@ -188,15 +188,16 @@ def do_train(args):
 
             if steps % args.save_steps == 0:
                 save_path = os.path.join(args.model_save_dir,
-                                         "step_" + str(steps))
+                                         "step_" + str(steps), "checkpoint")
                 print("\tsaving model as %s" % (save_path))
-                fluid.io.save_persistables(exe, save_path, train_program)
+                fluid.save(train_program, save_path)
 
             if steps % args.validation_steps == 0:
                 evaluate(exe, test_program, test_pyreader, train_ret)
 
-    save_path = os.path.join(args.model_save_dir, "step_" + str(steps))
-    fluid.io.save_persistables(exe, save_path, train_program)
+    save_path = os.path.join(args.model_save_dir, "step_" + str(steps),
+                             "checkpoint")
+    fluid.save(train_program, save_path)
 
 
 def do_eval(args):
diff --git a/PaddleNLP/lexical_analysis/train.py b/PaddleNLP/lexical_analysis/train.py
index cbd47e3e8e63e3be7ed0be8f7db84219eb43fe6f..aa1db213168b422d2e288471243b01e8440b6432 100644
--- a/PaddleNLP/lexical_analysis/train.py
+++ b/PaddleNLP/lexical_analysis/train.py
@@ -29,7 +29,7 @@ import reader
 import utils
 import creator
 from eval import test_process
-sys.path.append('../models/')
+sys.path.append('../shared_modules/models/')
 from model_check import check_cuda
 from model_check import check_version
 
@@ -151,8 +151,8 @@ def do_train(args):
             # save checkpoints
             if step % args.save_steps == 0 and step != 0:
                 save_path = os.path.join(args.model_save_dir,
-                                         "step_" + str(step))
-                fluid.io.save_persistables(exe, save_path, train_program)
+                                         "step_" + str(step), "checkpoint")
+                fluid.save(train_program, save_path)
             step += 1
 
     if args.enable_ce:
diff --git a/PaddleNLP/lexical_analysis/utils.py b/PaddleNLP/lexical_analysis/utils.py
index d3ee614de23ad896eb826bbf86239bba5860a77d..6b124f8df836635aa454d2f658851134215e4f4e 100644
--- a/PaddleNLP/lexical_analysis/utils.py
+++ b/PaddleNLP/lexical_analysis/utils.py
@@ -200,19 +200,11 @@ def init_checkpoint(exe, init_checkpoint_path, main_program):
     assert os.path.exists(
         init_checkpoint_path), "[%s] can't be found." % init_checkpoint_path
 
-    def existed_persitables(var):
-        """
-        If existed presitabels
-        """
-        if not fluid.io.is_persistable(var):
-            return False
-        return os.path.exists(os.path.join(init_checkpoint_path, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        init_checkpoint_path,
-        main_program=main_program,
-        predicate=existed_persitables)
+    try:
+        checkpoint_path = os.path.join(init_checkpoint_path, "checkpoint")
+        fluid.load(main_program, checkpoint_path, exe)
+    except Exception:
+        fluid.load(main_program, init_checkpoint_path, exe)
     print("Load model from {}".format(init_checkpoint_path))
 
 
@@ -224,15 +216,6 @@ def init_pretraining_params(exe,
     assert os.path.exists(pretraining_params_path
                           ), "[%s] can't be found." % pretraining_params_path
 
-    def _existed_params(var):
-        if not isinstance(var, fluid.framework.Parameter):
-            return False
-        return os.path.exists(os.path.join(pretraining_params_path, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        pretraining_params_path,
-        main_program=main_program,
-        predicate=_existed_params)
+    fluid.load(main_program, pretraining_params_path, exe)
     print("Load pretraining parameters from {}.".format(
         pretraining_params_path))
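The save and load sides stay consistent here: `train.py` and `run_ernie_sequence_labeling.py` now append a `checkpoint` prefix under each per-step directory, and `init_checkpoint` above probes that prefix before falling back to the bare path, which keeps checkpoints from the older flat layout loadable. The resulting layout, with an assumed directory and step number:

```python
import os

model_save_dir = "lac_models"  # illustrative directory
prefix = os.path.join(model_save_dir, "step_1000", "checkpoint")
# fluid.save(train_program, prefix) produces:
#   lac_models/step_1000/checkpoint.pdparams
#   lac_models/step_1000/checkpoint.pdopt
# init_checkpoint(exe, "lac_models/step_1000", program) resolves the
# "checkpoint" prefix first; older flat checkpoints hit the fallback branch.
print(prefix)
```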
diff --git a/PaddleNLP/PaddleMRC/README.md b/PaddleNLP/machine_reading_comprehension/README.md
similarity index 97%
rename from PaddleNLP/PaddleMRC/README.md
rename to PaddleNLP/machine_reading_comprehension/README.md
index cf565b5095220229b71bae9e8efe7b6ea119e13d..ef4d89c9b13bf3eb115e1bda59d0f330fd59209a 100644
--- a/PaddleNLP/PaddleMRC/README.md
+++ b/PaddleNLP/machine_reading_comprehension/README.md
@@ -14,12 +14,12 @@ DuReader is a large-scale, real-application, human-generated Chinese reading
  - Answers are generated by humans
  - Oriented to real application scenarios
  - Annotated in richer, finer-grained detail
- 
+
 More details about the DuReader dataset can be found on the [DuReader official website](https://ai.baidu.com//broad/subordinate?dataset=dureader).
 
 ### DuReader Baseline System
 
-The DuReader baseline system uses the [PaddlePaddle](http://paddlepaddle.org) deep learning framework to implement and upgrade a classic reading comprehension model, BiDAF, for the **DuReader reading comprehension dataset**. 
+The DuReader baseline system uses the [PaddlePaddle](http://paddlepaddle.org) deep learning framework to implement and upgrade a classic reading comprehension model, BiDAF, for the **DuReader reading comprehension dataset**.
 
 
 ## [KT-Net](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2019-KTNET)
@@ -30,7 +30,7 @@ KT-NET is Baidu NLP's groundbreaking joint language and knowledge representation
  - Accepted as a long paper at ACL 2019 ([paper link](https://www.aclweb.org/anthology/P19-1226/))
 
 Moreover, KT-NET is highly general: beyond machine reading comprehension, it also benefits other language understanding tasks such as natural language inference, paraphrase identification, and semantic similarity judgment.
- 
+
 
 ## [D-NET](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/MRQA2019-D-NET)
 D-NET is a "pretrain-finetune" framework aimed at improving the **generalization ability of reading comprehension models**. Its features include:
@@ -39,4 +39,3 @@ D-NET is a "pretrain-finetune" framework aimed at improving the **generalization
 - At the fine-tuning stage, multi-task and multi-domain learning strategies are introduced (based on the [PALM](https://github.com/PaddlePaddle/PALM) multi-task learning framework), effectively improving the model's generalization across domains
 
 Using the D-NET framework, Baidu won first place in the EMNLP 2019 [MRQA](https://mrqa.github.io/shared) international reading comprehension evaluation, beating the runner-up by nearly two percentage points and ranking first on 10 of the 12 test datasets.
-
diff --git a/PaddleNLP/PaddleMT/transformer/.run_ce.sh b/PaddleNLP/machine_translation/transformer/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/.run_ce.sh
rename to PaddleNLP/machine_translation/transformer/.run_ce.sh
diff --git a/PaddleNLP/PaddleMT/transformer/README.md b/PaddleNLP/machine_translation/transformer/README.md
similarity index 95%
rename from PaddleNLP/PaddleMT/transformer/README.md
rename to PaddleNLP/machine_translation/transformer/README.md
index 90d47f53cc4566bc5428d95e24ee43641e27c90b..1e05dc22050e9132eeac1eef72988f59cc6081e9 100644
--- a/PaddleNLP/PaddleMT/transformer/README.md
+++ b/PaddleNLP/machine_translation/transformer/README.md
@@ -106,7 +106,7 @@ python -u main.py \
   --prepostprocess_dropout 0.3
 ```
 
-Training uses all GPUs by default; set the `CUDA_VISIBLE_DEVICES` environment variable to control how many GPUs are used. CPU-only training is also possible (set `--use_cuda False`), though relatively slow. If `save_param` and `save_checkpoint` are provided (defaults: trained_params and trained_ckpts), the current parameter values and checkpoint are saved to those directories every `save_step` iterations (default 10000), and a log like the following is printed to standard output every `print_step` iterations (default 100):
+Training uses all GPUs by default; set the `CUDA_VISIBLE_DEVICES` environment variable to control how many GPUs are used. CPU-only training is also possible (set `--use_cuda False`), though relatively slow. If `save_model_path` is provided (default: saved_models), the current training checkpoint is saved to that directory every `save_step` iterations (default 10000) as two files, `transformer.pdparams` (model parameters) and `transformer.pdopt` (optimizer state), and a log like the following is printed to standard output every `print_step` iterations (default 100):
 
 ```txt
 [2019-08-02 15:30:51,656 INFO train.py:262] step_idx: 150100, epoch: 32, batch: 1364, avg loss: 2.880427, normalized loss: 1.504687, ppl: 17.821888, speed: 3.34 step/s
@@ -195,7 +195,7 @@ BLEU = 26.35, 57.7/32.1/20.0/13.0 (BP=1.000, ratio=1.013, hyp_len=63903, ref_len
 
 ### Pretrained Models
 
-We provide downloadable parameters for the [base model](https://transformer-res.bj.bcebos.com/base_model_params.tar.gz) and [big model](https://transformer-res.bj.bcebos.com/big_model_params.tar.gz) matching the BLEU scores above (note that the models were trained and tested on the downloadable data provided here).
+We provide downloadable parameters for the [base model](https://transformer-res.bj.bcebos.com/base_model_graph.tar.gz) and [big model](https://transformer-res.bj.bcebos.com/big_model_graph.tar.gz) matching the BLEU scores above (note that the models were trained and tested on the downloadable data provided here).
 
 ## 进阶使用
 
diff --git a/PaddleNLP/PaddleLARK/BERT/__init__.py b/PaddleNLP/machine_translation/transformer/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/__init__.py
rename to PaddleNLP/machine_translation/transformer/__init__.py
diff --git a/PaddleNLP/PaddleMT/transformer/_ce.py b/PaddleNLP/machine_translation/transformer/_ce.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/_ce.py
rename to PaddleNLP/machine_translation/transformer/_ce.py
diff --git a/PaddleNLP/PaddleMT/transformer/desc.py b/PaddleNLP/machine_translation/transformer/desc.py
similarity index 92%
rename from PaddleNLP/PaddleMT/transformer/desc.py
rename to PaddleNLP/machine_translation/transformer/desc.py
index f6fa768adc42ebcebe36eb60b52f7f6e366b3887..3eadd8433a79775b46047a0e74cb7480cbf96657 100644
--- a/PaddleNLP/PaddleMT/transformer/desc.py
+++ b/PaddleNLP/machine_translation/transformer/desc.py
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+
 def get_input_descs(args):
     """
     Generate a dict mapping data fields to the corresponding data shapes and
@@ -42,11 +43,12 @@ def get_input_descs(args):
         # encoder.
         # The actual data shape of src_slf_attn_bias is:
         # [batch_size, n_head, max_src_len_in_batch, max_src_len_in_batch]
-        "src_slf_attn_bias": [(batch_size, n_head, seq_len, seq_len), "float32"],
+        "src_slf_attn_bias":
+        [(batch_size, n_head, seq_len, seq_len), "float32"],
         # The actual data shape of trg_word is:
         # [batch_size, max_trg_len_in_batch, 1]
         "trg_word": [(batch_size, seq_len), "int64",
-                    2],  # lod_level is only used in fast decoder.
+                     2],  # lod_level is only used in fast decoder.
         # The actual data shape of trg_pos is:
         # [batch_size, max_trg_len_in_batch, 1]
         "trg_pos": [(batch_size, seq_len), "int64"],
@@ -54,12 +56,14 @@ def get_input_descs(args):
         # subsequent words in the decoder.
         # The actual data shape of trg_slf_attn_bias is:
         # [batch_size, n_head, max_trg_len_in_batch, max_trg_len_in_batch]
-        "trg_slf_attn_bias": [(batch_size, n_head, seq_len, seq_len), "float32"],
+        "trg_slf_attn_bias":
+        [(batch_size, n_head, seq_len, seq_len), "float32"],
         # This input is used to remove attention weights on paddings of the source
         # input in the encoder-decoder attention.
         # The actual data shape of trg_src_attn_bias is:
         # [batch_size, n_head, max_trg_len_in_batch, max_src_len_in_batch]
-        "trg_src_attn_bias": [(batch_size, n_head, seq_len, seq_len), "float32"],
+        "trg_src_attn_bias":
+        [(batch_size, n_head, seq_len, seq_len), "float32"],
         # This input is used in independent decoder program for inference.
         # The actual data shape of enc_output is:
         # [batch_size, max_src_len_in_batch, d_model]
@@ -80,6 +84,7 @@ def get_input_descs(args):
 
     return input_descs
 
+
 # Names of word embedding table which might be reused for weight sharing.
 word_emb_param_names = (
     "src_word_emb_table",
diff --git a/PaddleNLP/PaddleMT/transformer/gen_data.sh b/PaddleNLP/machine_translation/transformer/gen_data.sh
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/gen_data.sh
rename to PaddleNLP/machine_translation/transformer/gen_data.sh
diff --git a/PaddleNLP/PaddleMT/transformer/images/multi_head_attention.png b/PaddleNLP/machine_translation/transformer/images/multi_head_attention.png
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/images/multi_head_attention.png
rename to PaddleNLP/machine_translation/transformer/images/multi_head_attention.png
diff --git a/PaddleNLP/PaddleMT/transformer/images/transformer_network.png b/PaddleNLP/machine_translation/transformer/images/transformer_network.png
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/images/transformer_network.png
rename to PaddleNLP/machine_translation/transformer/images/transformer_network.png
diff --git a/PaddleNLP/PaddleMT/transformer/inference_model.py b/PaddleNLP/machine_translation/transformer/inference_model.py
similarity index 66%
rename from PaddleNLP/PaddleMT/transformer/inference_model.py
rename to PaddleNLP/machine_translation/transformer/inference_model.py
index 9ca71777e2b90ef03215527a50998b87d81f6cbc..baf6a7325a661b10407280c1e222c730f5e0087c 100644
--- a/PaddleNLP/PaddleMT/transformer/inference_model.py
+++ b/PaddleNLP/machine_translation/transformer/inference_model.py
@@ -24,6 +24,7 @@ import paddle.fluid as fluid
 
 from utils.input_field import InputField
 from utils.configure import PDConfig
+from utils.load import load
 
 # include task-specific libs
 import desc
@@ -31,51 +32,6 @@ import reader
 from transformer import create_net
 
 
-def init_from_pretrain_model(args, exe, program):
-
-    assert isinstance(args.init_from_pretrain_model, str)
-
-    if not os.path.exists(args.init_from_pretrain_model):
-        raise Warning("The pretrained params do not exist.")
-        return False
-
-    def existed_params(var):
-        if not isinstance(var, fluid.framework.Parameter):
-            return False
-        return os.path.exists(
-            os.path.join(args.init_from_pretrain_model, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        args.init_from_pretrain_model,
-        main_program=program,
-        predicate=existed_params)
-
-    print("finish initing model from pretrained params from %s" %
-          (args.init_from_pretrain_model))
-
-    return True
-
-
-def init_from_params(args, exe, program):
-
-    assert isinstance(args.init_from_params, str)
-
-    if not os.path.exists(args.init_from_params):
-        raise Warning("the params path does not exist.")
-        return False
-
-    fluid.io.load_params(
-        executor=exe,
-        dirname=args.init_from_params,
-        main_program=program,
-        filename="params.pdparams")
-
-    print("finish init model from params from %s" % (args.init_from_params))
-
-    return True
-
-
 def do_save_inference_model(args):
     if args.use_cuda:
         dev_count = fluid.core.get_cuda_device_count()
@@ -84,6 +40,11 @@ def do_save_inference_model(args):
         dev_count = int(os.environ.get('CPU_NUM', 1))
         place = fluid.CPUPlace()
 
+    src_vocab = reader.DataProcessor.load_dict(args.src_vocab_fpath)
+    trg_vocab = reader.DataProcessor.load_dict(args.trg_vocab_fpath)
+    args.src_vocab_size = len(src_vocab)
+    args.trg_vocab_size = len(trg_vocab)
+
     test_prog = fluid.default_main_program()
     startup_prog = fluid.default_startup_program()
 
@@ -119,13 +80,10 @@ def do_save_inference_model(args):
     exe = fluid.Executor(place)
 
     exe.run(startup_prog)
-    assert (args.init_from_params) or (args.init_from_pretrain_model)
-
-    if args.init_from_params:
-        init_from_params(args, exe, test_prog)
-
-    elif args.init_from_pretrain_model:
-        init_from_pretrain_model(args, exe, test_prog)
+    assert (
+        args.init_from_params), "must set init_from_params to load parameters"
+    load(test_prog, os.path.join(args.init_from_params, "transformer"), exe)
+    print("finished initializing model from params at %s" % (args.init_from_params))
 
     # saving inference model
 
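The added vocabulary loading is about ordering: `src_vocab_size` and `trg_vocab_size` fix the embedding-table shapes, so they must be set before `create_net` builds `test_prog`, and therefore before `load` matches the saved `transformer.pdparams` against those shapes. A sketch of the idea, with an illustrative stand-in for `reader.DataProcessor.load_dict`:

```python
def load_dict(dict_path):
    # Illustrative: one token per line, id = line number.
    word_dict = {}
    with open(dict_path, "rb") as f:
        for i, line in enumerate(f):
            word_dict[line.strip().decode("utf8")] = i
    return word_dict

# The embedding tables are sized from these counts, so set them before the
# network is built and before parameters are restored:
#   args.src_vocab_size = len(load_dict(args.src_vocab_fpath))
#   args.trg_vocab_size = len(load_dict(args.trg_vocab_fpath))
```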
diff --git a/PaddleNLP/PaddleMT/transformer/main.py b/PaddleNLP/machine_translation/transformer/main.py
similarity index 97%
rename from PaddleNLP/PaddleMT/transformer/main.py
rename to PaddleNLP/machine_translation/transformer/main.py
index feaf29baeb386b7843651ff9fc4197861d702c66..0f2412bffea2956a953a0683617d25a7cfc0d402 100644
--- a/PaddleNLP/PaddleMT/transformer/main.py
+++ b/PaddleNLP/machine_translation/transformer/main.py
@@ -25,7 +25,6 @@ from train import do_train
 from predict import do_predict
 from inference_model import do_save_inference_model
 
-
 if __name__ == "__main__":
     LOG_FORMAT = "[%(asctime)s %(levelname)s %(filename)s:%(lineno)d] %(message)s"
     logging.basicConfig(
@@ -43,4 +42,4 @@ if __name__ == "__main__":
         do_predict(args)
 
     if args.do_save_inference_model:
-        do_save_inference_model(args)
\ No newline at end of file
+        do_save_inference_model(args)
diff --git a/PaddleNLP/PaddleMT/transformer/predict.py b/PaddleNLP/machine_translation/transformer/predict.py
similarity index 82%
rename from PaddleNLP/PaddleMT/transformer/predict.py
rename to PaddleNLP/machine_translation/transformer/predict.py
index 2ad93e5838d6a87c1aa9deb8e35da7f071aec51d..179e39f6efdb3d78cafb87a97f6e0d9de346dac5 100644
--- a/PaddleNLP/PaddleMT/transformer/predict.py
+++ b/PaddleNLP/machine_translation/transformer/predict.py
@@ -25,6 +25,7 @@ import paddle.fluid as fluid
 from utils.input_field import InputField
 from utils.configure import PDConfig
 from utils.check import check_gpu, check_version
+from utils.load import load
 
 # include task-specific libs
 import desc
@@ -32,51 +33,6 @@ import reader
 from transformer import create_net, position_encoding_init
 
 
-def init_from_pretrain_model(args, exe, program):
-
-    assert isinstance(args.init_from_pretrain_model, str)
-
-    if not os.path.exists(args.init_from_pretrain_model):
-        raise Warning("The pretrained params do not exist.")
-        return False
-
-    def existed_params(var):
-        if not isinstance(var, fluid.framework.Parameter):
-            return False
-        return os.path.exists(
-            os.path.join(args.init_from_pretrain_model, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        args.init_from_pretrain_model,
-        main_program=program,
-        predicate=existed_params)
-
-    print("finish initing model from pretrained params from %s" %
-          (args.init_from_pretrain_model))
-
-    return True
-
-
-def init_from_params(args, exe, program):
-
-    assert isinstance(args.init_from_params, str)
-
-    if not os.path.exists(args.init_from_params):
-        raise Warning("the params path does not exist.")
-        return False
-
-    fluid.io.load_params(
-        executor=exe,
-        dirname=args.init_from_params,
-        main_program=program,
-        filename="params.pdparams")
-
-    print("finish init model from params from %s" % (args.init_from_params))
-
-    return True
-
-
 def post_process_seq(seq, bos_idx, eos_idx, output_bos=False, output_eos=False):
     """
     Post-process the beam-search decoded sequence. Truncate from the first
@@ -160,13 +116,10 @@ def do_predict(args):
     exe = fluid.Executor(place)
 
     exe.run(startup_prog)
-    assert (args.init_from_params) or (args.init_from_pretrain_model)
-
-    if args.init_from_params:
-        init_from_params(args, exe, test_prog)
-
-    elif args.init_from_pretrain_model:
-        init_from_pretrain_model(args, exe, test_prog)
+    assert (
+        args.init_from_params), "must set init_from_params to load parameters"
+    load(test_prog, os.path.join(args.init_from_params, "transformer"), exe)
+    print("finished initializing model from params at %s" % (args.init_from_params))
 
     # to avoid a longer length than training, reset the size of position encoding to max_length
     for pos_enc_param_name in desc.pos_enc_param_names:
diff --git a/PaddleNLP/PaddleMT/transformer/reader.py b/PaddleNLP/machine_translation/transformer/reader.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/reader.py
rename to PaddleNLP/machine_translation/transformer/reader.py
diff --git a/PaddleNLP/PaddleMT/transformer/train.py b/PaddleNLP/machine_translation/transformer/train.py
similarity index 75%
rename from PaddleNLP/PaddleMT/transformer/train.py
rename to PaddleNLP/machine_translation/transformer/train.py
index c9fb5d7220c325477d6a0e5984f11e4e9b85f79a..7ae344e63105240145335d04ca97cb289ed974cb 100644
--- a/PaddleNLP/PaddleMT/transformer/train.py
+++ b/PaddleNLP/machine_translation/transformer/train.py
@@ -27,6 +27,7 @@ import utils.dist_utils as dist_utils
 from utils.input_field import InputField
 from utils.configure import PDConfig
 from utils.check import check_gpu, check_version
+from utils.load import load
 
 # include task-specific libs
 import desc
@@ -39,91 +40,6 @@ if os.environ.get('FLAGS_eager_delete_tensor_gb', None) is None:
 num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
 
 
-def init_from_pretrain_model(args, exe, program):
-
-    assert isinstance(args.init_from_pretrain_model, str)
-
-    if not os.path.exists(args.init_from_pretrain_model):
-        raise Warning("The pretrained params do not exist.")
-        return False
-
-    def existed_params(var):
-        if not isinstance(var, fluid.framework.Parameter):
-            return False
-        return os.path.exists(
-            os.path.join(args.init_from_pretrain_model, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        args.init_from_pretrain_model,
-        main_program=program,
-        predicate=existed_params)
-
-    print("finish initing model from pretrained params from %s" %
-          (args.init_from_pretrain_model))
-
-    return True
-
-
-def init_from_checkpoint(args, exe, program):
-
-    assert isinstance(args.init_from_checkpoint, str)
-
-    if not os.path.exists(args.init_from_checkpoint):
-        raise Warning("the checkpoint path does not exist.")
-        return False
-
-    fluid.io.load_persistables(
-        executor=exe,
-        dirname=args.init_from_checkpoint,
-        main_program=program,
-        filename="checkpoint.pdckpt")
-
-    print("finish initing model from checkpoint from %s" %
-          (args.init_from_checkpoint))
-
-    return True
-
-
-def save_checkpoint(args, exe, program, dirname):
-
-    assert isinstance(args.save_model_path, str)
-
-    checkpoint_dir = os.path.join(args.save_model_path, args.save_checkpoint)
-
-    if not os.path.exists(checkpoint_dir):
-        os.mkdir(checkpoint_dir)
-
-    fluid.io.save_persistables(
-        exe,
-        os.path.join(checkpoint_dir, dirname),
-        main_program=program,
-        filename="checkpoint.pdparams")
-
-    print("save checkpoint at %s" % (os.path.join(checkpoint_dir, dirname)))
-
-    return True
-
-
-def save_param(args, exe, program, dirname):
-
-    assert isinstance(args.save_model_path, str)
-
-    param_dir = os.path.join(args.save_model_path, args.save_param)
-
-    if not os.path.exists(param_dir):
-        os.mkdir(param_dir)
-
-    fluid.io.save_params(
-        exe,
-        os.path.join(param_dir, dirname),
-        main_program=program,
-        filename="params.pdparams")
-    print("save parameters at %s" % (os.path.join(param_dir, dirname)))
-
-    return True
-
-
 def do_train(args):
     if args.use_cuda:
         if num_trainers > 1:  # for multi-process gpu training
@@ -226,11 +142,17 @@ def do_train(args):
 
     ## init from some checkpoint, to resume the previous training
     if args.init_from_checkpoint:
-        init_from_checkpoint(args, exe, train_prog)
+        load(train_prog,
+             os.path.join(args.init_from_checkpoint, "transformer"), exe)
+        print("finished initializing model from checkpoint at %s" %
+              (args.init_from_checkpoint))
 
     ## init from some pretrain models, to better solve the current task
     if args.init_from_pretrain_model:
-        init_from_pretrain_model(args, exe, train_prog)
+        load(train_prog,
+             os.path.join(args.init_from_pretrain_model, "transformer"), exe)
+        print("finished initializing model from pretrained params at %s" %
+              (args.init_from_pretrain_model))
 
     build_strategy = fluid.compiler.BuildStrategy()
     build_strategy.enable_inplace = True
@@ -259,7 +181,7 @@ def do_train(args):
 
         batch_id = 0
         while True:
-            if args.max_iter and total_batch_num == args.max_iter: # this for benchmark
+            if args.max_iter and total_batch_num == args.max_iter:  # this is for benchmark
                 return
             try:
                 outs = exe.run(compiled_train_prog,
@@ -293,18 +215,15 @@ def do_train(args):
                         avg_batch_time = time.time()
 
                 if step_idx % args.save_step == 0 and step_idx != 0:
-
-                    if args.save_checkpoint:
-                        save_checkpoint(args, exe, train_prog,
-                                        "step_" + str(step_idx))
-
-                    if args.save_param:
-                        save_param(args, exe, train_prog,
-                                   "step_" + str(step_idx))
+                    if args.save_model_path:
+                        model_path = os.path.join(args.save_model_path,
+                                                  "step_" + str(step_idx),
+                                                  "transformer")
+                        fluid.save(train_prog, model_path)
 
                 batch_id += 1
                 step_idx += 1
-                total_batch_num = total_batch_num + 1 # this is for benchmark
+                total_batch_num = total_batch_num + 1  # this is for benchmark
 
                 # profiler tools for benchmark
                 if args.is_profiler and pass_id == 0 and batch_id == args.print_step:
@@ -319,11 +238,10 @@ def do_train(args):
 
         time_consumed = time.time() - pass_start_time
 
-    if args.save_checkpoint:
-        save_checkpoint(args, exe, train_prog, "step_final")
-
-    if args.save_param:
-        save_param(args, exe, train_prog, "step_final")
+    if args.save_model_path:
+        model_path = os.path.join(args.save_model_path, "step_final",
+                                  "transformer")
+        fluid.save(train_prog, model_path)
 
     if args.enable_ce:  # For CE
         print("kpis\ttrain_cost_card%d\t%f" % (dev_count, total_avg_cost))
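With `save_checkpoint` and `save_param` deprecated, a single `fluid.save` call per step now covers both roles, and resuming maps directly onto the `load` helper imported at the top of `train.py`. Assuming the `transformer.yaml` defaults, the flow looks like:

```python
import os

save_model_path = "saved_models"  # transformer.yaml default
step = 100000                     # illustrative step count
prefix = os.path.join(save_model_path, "step_%d" % step, "transformer")
# Training: fluid.save(train_prog, prefix) writes
#   saved_models/step_100000/transformer.pdparams
#   saved_models/step_100000/transformer.pdopt
# Resuming: set init_from_checkpoint to "saved_models/step_100000", and
# train.py calls load(train_prog, prefix, exe) to restore parameters and
# optimizer state together.
print(prefix)
```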
diff --git a/PaddleNLP/PaddleMT/transformer/transformer.py b/PaddleNLP/machine_translation/transformer/transformer.py
similarity index 81%
rename from PaddleNLP/PaddleMT/transformer/transformer.py
rename to PaddleNLP/machine_translation/transformer/transformer.py
index be20001b25fdb94fcc4bc234bae220413ddfacdd..a73ce9f3006a30c0183cb4a71c96ce36c9677b07 100644
--- a/PaddleNLP/PaddleMT/transformer/transformer.py
+++ b/PaddleNLP/machine_translation/transformer/transformer.py
@@ -17,6 +17,7 @@ import numpy as np
 
 import paddle.fluid as fluid
 import paddle.fluid.layers as layers
+from paddle.fluid.layers.utils import map_structure
 
 from desc import *
 
@@ -90,7 +91,6 @@ def multi_head_attention(queries,
                          n_head=1,
                          dropout_rate=0.,
                          cache=None,
-                         gather_idx=None,
                          static_kv=False):
     """
     Multi-Head Attention. Note that attn_bias is added to the logit before
@@ -161,30 +161,28 @@ def multi_head_attention(queries,
         v = transpose_layer(x=reshaped_v, perm=[0, 2, 1, 3])
 
         if cache is not None:  # only for faster inference
+            cache_, i = cache
             if static_kv:  # For encoder-decoder attention in inference
-                cache_k, cache_v = cache["static_k"], cache["static_v"]
-                # To init the static_k and static_v in cache.
-                # Maybe we can use condition_op(if_else) to do these at the first
-                # step in while loop to replace these, however it might be less
-                # efficient.
+                cache_k, cache_v = cache_["static_k"], cache_["static_v"]
+                # To init the static_k and static_v in global block.
                 static_cache_init = wrap_layer_with_block(
                     layers.assign,
                     fluid.default_main_program().current_block().parent_idx)
-                static_cache_init(k, cache_k)
-                static_cache_init(v, cache_v)
+                static_cache_init(
+                    k,
+                    fluid.default_main_program().global_block().var(
+                        "static_k_%d" % i))
+                static_cache_init(
+                    v,
+                    fluid.default_main_program().global_block().var(
+                        "static_v_%d" % i))
+                k, v = cache_k, cache_v
             else:  # For decoder self-attention in inference
-                cache_k, cache_v = cache["k"], cache["v"]
-            # gather cell states corresponding to selected parent
-            select_k = layers.gather(cache_k, index=gather_idx)
-            select_v = layers.gather(cache_v, index=gather_idx)
-            if not static_kv:
-                # For self attention in inference, use cache and concat time steps.
-                select_k = layers.concat([select_k, k], axis=2)
-                select_v = layers.concat([select_v, v], axis=2)
-            # update cell states(caches) cached in global block
-            layers.assign(select_k, cache_k)
-            layers.assign(select_v, cache_v)
-            return q, select_k, select_v
+                # use cache and concat time steps.
+                cache_k, cache_v = cache_["k"], cache_["v"]
+                k = layers.concat([cache_k, k], axis=2)
+                v = layers.concat([cache_v, v], axis=2)
+                cache_["k"], cache_["v"] = (k, v)
         return q, k, v
 
     def __combine_heads(x):
@@ -301,15 +299,16 @@ def prepare_encoder_decoder(src_word,
         src_word,
         size=[src_vocab_size, src_emb_dim],
         padding_idx=bos_idx,  # set embedding of bos to 0
-        param_attr=fluid.ParamAttr(name=word_emb_param_name,
-                                   initializer=fluid.initializer.Normal(
-                                       0., src_emb_dim**-0.5)))
+        param_attr=fluid.ParamAttr(
+            name=word_emb_param_name,
+            initializer=fluid.initializer.Normal(0., src_emb_dim**-0.5)))
 
     src_word_emb = layers.scale(x=src_word_emb, scale=src_emb_dim**0.5)
-    src_pos_enc = fluid.embedding(src_pos,
-                                  size=[src_max_len, src_emb_dim],
-                                  param_attr=fluid.ParamAttr(
-                                      name=pos_enc_param_name, trainable=False))
+    src_pos_enc = fluid.embedding(
+        src_pos,
+        size=[src_max_len, src_emb_dim],
+        param_attr=fluid.ParamAttr(
+            name=pos_enc_param_name, trainable=False))
     src_pos_enc.stop_gradient = True
     enc_input = src_word_emb + src_pos_enc
     return layers.dropout(
@@ -405,8 +404,7 @@ def decoder_layer(dec_input,
                   relu_dropout,
                   preprocess_cmd,
                   postprocess_cmd,
-                  cache=None,
-                  gather_idx=None):
+                  cache=None):
     """ The layer to be stacked in decoder part.
     The structure of this module is similar to that in the encoder part except
     a multi-head attention is added to implement encoder-decoder attention.
@@ -421,8 +419,7 @@ def decoder_layer(dec_input,
         d_model,
         n_head,
         attention_dropout,
-        cache=cache,
-        gather_idx=gather_idx)
+        cache=cache)
     slf_attn_output = post_process_layer(
         dec_input,
         slf_attn_output,
@@ -440,7 +437,6 @@ def decoder_layer(dec_input,
         n_head,
         attention_dropout,
         cache=cache,
-        gather_idx=gather_idx,
         static_kv=True)
     enc_attn_output = post_process_layer(
         slf_attn_output,
@@ -476,8 +472,7 @@ def decoder(dec_input,
             relu_dropout,
             preprocess_cmd,
             postprocess_cmd,
-            caches=None,
-            gather_idx=None):
+            caches=None):
     """
     The decoder is composed of a stack of identical decoder_layer layers.
     """
@@ -497,8 +492,7 @@ def decoder(dec_input,
             relu_dropout,
             preprocess_cmd,
             postprocess_cmd,
-            cache=None if caches is None else caches[i],
-            gather_idx=gather_idx)
+            cache=None if caches is None else (caches[i], i))
         dec_input = dec_output
     dec_output = pre_process_layer(dec_output, preprocess_cmd,
                                    prepostprocess_dropout)
@@ -536,48 +530,51 @@ def transformer(model_input,
     label = model_input.lbl_word
     weights = model_input.lbl_weight
 
-    enc_output = wrap_encoder(enc_inputs,
-                              src_vocab_size,
-                              max_length,
-                              n_layer,
-                              n_head,
-                              d_key,
-                              d_value,
-                              d_model,
-                              d_inner_hid,
-                              prepostprocess_dropout,
-                              attention_dropout,
-                              relu_dropout,
-                              preprocess_cmd,
-                              postprocess_cmd,
-                              weight_sharing,
-                              bos_idx=bos_idx)
-
-    predict = wrap_decoder(dec_inputs,
-                           trg_vocab_size,
-                           max_length,
-                           n_layer,
-                           n_head,
-                           d_key,
-                           d_value,
-                           d_model,
-                           d_inner_hid,
-                           prepostprocess_dropout,
-                           attention_dropout,
-                           relu_dropout,
-                           preprocess_cmd,
-                           postprocess_cmd,
-                           weight_sharing,
-                           enc_output=enc_output)
+    enc_output = wrap_encoder(
+        enc_inputs,
+        src_vocab_size,
+        max_length,
+        n_layer,
+        n_head,
+        d_key,
+        d_value,
+        d_model,
+        d_inner_hid,
+        prepostprocess_dropout,
+        attention_dropout,
+        relu_dropout,
+        preprocess_cmd,
+        postprocess_cmd,
+        weight_sharing,
+        bos_idx=bos_idx)
+
+    predict = wrap_decoder(
+        dec_inputs,
+        trg_vocab_size,
+        max_length,
+        n_layer,
+        n_head,
+        d_key,
+        d_value,
+        d_model,
+        d_inner_hid,
+        prepostprocess_dropout,
+        attention_dropout,
+        relu_dropout,
+        preprocess_cmd,
+        postprocess_cmd,
+        weight_sharing,
+        enc_output=enc_output)
 
     # Padding index do not contribute to the total loss. The weights is used to
     # cancel padding index in calculating the loss.
     if label_smooth_eps:
         # TODO: use fluid.input.one_hot after softmax_with_cross_entropy removing
         # the enforcement that the last dimension of label must be 1.
-        label = layers.label_smooth(label=layers.one_hot(input=label,
-                                                         depth=trg_vocab_size),
-                                    epsilon=label_smooth_eps)
+        label = layers.label_smooth(
+            label=layers.one_hot(
+                input=label, depth=trg_vocab_size),
+            epsilon=label_smooth_eps)
 
     cost = layers.softmax_with_cross_entropy(
         logits=predict,
@@ -654,7 +651,6 @@ def wrap_decoder(dec_inputs,
                  weight_sharing,
                  enc_output=None,
                  caches=None,
-                 gather_idx=None,
                  bos_idx=0):
     """
     The wrapper assembles together all needed layers for the decoder.
@@ -687,8 +683,7 @@ def wrap_decoder(dec_inputs,
         relu_dropout,
         preprocess_cmd,
         postprocess_cmd,
-        caches=caches,
-        gather_idx=gather_idx)
+        caches=caches)
     # Reshape to 2D tensor to use GEMM instead of BatchedGEMM
     dec_output = layers.reshape(
         dec_output, shape=[-1, dec_output.shape[-1]], inplace=True)
@@ -722,22 +717,23 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len,
     dec_inputs = (model_input.trg_word, model_input.init_score,
                   model_input.init_idx, model_input.trg_src_attn_bias)
 
-    enc_output = wrap_encoder(enc_inputs,
-                              src_vocab_size,
-                              max_in_len,
-                              n_layer,
-                              n_head,
-                              d_key,
-                              d_value,
-                              d_model,
-                              d_inner_hid,
-                              prepostprocess_dropout,
-                              attention_dropout,
-                              relu_dropout,
-                              preprocess_cmd,
-                              postprocess_cmd,
-                              weight_sharing,
-                              bos_idx=bos_idx)
+    enc_output = wrap_encoder(
+        enc_inputs,
+        src_vocab_size,
+        max_in_len,
+        n_layer,
+        n_head,
+        d_key,
+        d_value,
+        d_model,
+        d_inner_hid,
+        prepostprocess_dropout,
+        attention_dropout,
+        relu_dropout,
+        preprocess_cmd,
+        postprocess_cmd,
+        weight_sharing,
+        bos_idx=bos_idx)
     start_tokens, init_scores, parent_idx, trg_src_attn_bias = dec_inputs
 
     def beam_search():
@@ -748,8 +744,6 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len,
             force_cpu=True)
         step_idx = layers.fill_constant(
             shape=[1], dtype=start_tokens.dtype, value=0, force_cpu=True)
-        cond = layers.less_than(x=step_idx, y=max_len)  # default force_cpu=True
-        while_op = layers.While(cond)
         # array states will be stored for each step.
         ids = layers.array_write(
             layers.reshape(start_tokens, (-1, 1)), step_idx)
@@ -773,21 +767,31 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len,
                     dtype=enc_output.dtype,
                     value=0),
                 "static_k":  # for encoder-decoder attention
-                layers.create_tensor(dtype=enc_output.dtype),
+                fluid.data(
+                    shape=[None, n_head, 0, d_key],
+                    dtype=enc_output.dtype,
+                    name=("static_k_%d" % i)),
                 "static_v":  # for encoder-decoder attention
-                layers.create_tensor(dtype=enc_output.dtype)
+                fluid.data(
+                    shape=[None, n_head, 0, d_value],
+                    dtype=enc_output.dtype,
+                    name=("static_v_%d" % i)),
             } for i in range(n_layer)
         ]
 
-        with while_op.block():
-            pre_ids = layers.array_read(array=ids, i=step_idx)
-            # Since beam_search_op dosen't enforce pre_ids' shape, we can do
-            # inplace reshape here which actually change the shape of pre_ids.
-            # pre_ids = layers.reshape(pre_ids, (-1, 1, 1), inplace=True)
-            pre_scores = layers.array_read(array=scores, i=step_idx)
+        def cond_func(step_idx, selected_ids, selected_scores, gather_idx,
+                      caches, trg_src_attn_bias):
+            length_cond = layers.less_than(x=step_idx, y=max_len)
+            finish_cond = layers.logical_not(layers.is_empty(x=selected_ids))
+            return layers.logical_and(x=length_cond, y=finish_cond)
+
+        def body_func(step_idx, pre_ids, pre_scores, gather_idx, caches,
+                      trg_src_attn_bias):
             # gather cell states corresponding to selected parent
+            pre_caches = map_structure(
+                lambda x: layers.gather(x, index=gather_idx), caches)
             pre_src_attn_bias = layers.gather(
-                trg_src_attn_bias, index=parent_idx)
+                trg_src_attn_bias, index=gather_idx)
             pre_pos = layers.elementwise_mul(
                 x=layers.fill_constant_batch_size_like(
                     input=pre_src_attn_bias,  # can't use lod tensor here
@@ -796,25 +800,25 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len,
                     dtype=pre_ids.dtype),
                 y=step_idx,
                 axis=0)
-            logits = wrap_decoder((pre_ids, pre_pos, None, pre_src_attn_bias),
-                                  trg_vocab_size,
-                                  max_in_len,
-                                  n_layer,
-                                  n_head,
-                                  d_key,
-                                  d_value,
-                                  d_model,
-                                  d_inner_hid,
-                                  prepostprocess_dropout,
-                                  attention_dropout,
-                                  relu_dropout,
-                                  preprocess_cmd,
-                                  postprocess_cmd,
-                                  weight_sharing,
-                                  enc_output=enc_output,
-                                  caches=caches,
-                                  gather_idx=parent_idx,
-                                  bos_idx=bos_idx)
+            logits = wrap_decoder(
+                (pre_ids, pre_pos, None, pre_src_attn_bias),
+                trg_vocab_size,
+                max_in_len,
+                n_layer,
+                n_head,
+                d_key,
+                d_value,
+                d_model,
+                d_inner_hid,
+                prepostprocess_dropout,
+                attention_dropout,
+                relu_dropout,
+                preprocess_cmd,
+                postprocess_cmd,
+                weight_sharing,
+                enc_output=enc_output,
+                caches=pre_caches,
+                bos_idx=bos_idx)
             # intra-beam topK
             topk_scores, topk_indices = layers.topk(
                 input=layers.softmax(logits), k=beam_size)
@@ -832,16 +836,20 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len,
                 beam_size=beam_size,
                 end_id=eos_idx,
                 return_parent_idx=True)
-            layers.increment(x=step_idx, value=1.0, in_place=True)
-            # cell states(caches) have been updated in wrap_decoder,
-            # only need to update beam search states here.
+            step_idx = layers.increment(x=step_idx, value=1.0, in_place=False)
             layers.array_write(selected_ids, i=step_idx, array=ids)
             layers.array_write(selected_scores, i=step_idx, array=scores)
-            layers.assign(gather_idx, parent_idx)
-            layers.assign(pre_src_attn_bias, trg_src_attn_bias)
-            length_cond = layers.less_than(x=step_idx, y=max_len)
-            finish_cond = layers.logical_not(layers.is_empty(x=selected_ids))
-            layers.logical_and(x=length_cond, y=finish_cond, out=cond)
+            return (step_idx, selected_ids, selected_scores, gather_idx,
+                    pre_caches, pre_src_attn_bias)
+
+        _ = layers.while_loop(
+            cond=cond_func,
+            body=body_func,
+            loop_vars=[
+                step_idx, start_tokens, init_scores, parent_idx, caches,
+                trg_src_attn_bias
+            ],
+            is_test=True)
 
         finished_ids, finished_scores = layers.beam_search_decode(
             ids, scores, beam_size=beam_size, end_id=eos_idx)
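The beam-search rewrite swaps the imperative `layers.While` block, which mutated `cond` and the caches with `layers.assign`, for the functional `layers.while_loop`: `body_func` returns the updated loop variables and `cond_func` recomputes the termination predicate from them each step. A minimal, runnable example of the same control-flow primitive, summing the integers 0..9:

```python
import paddle.fluid as fluid
import paddle.fluid.layers as layers

i = layers.fill_constant(shape=[1], dtype="int64", value=0)
limit = layers.fill_constant(shape=[1], dtype="int64", value=10)
total = layers.fill_constant(shape=[1], dtype="int64", value=0)

def cond(i, total):
    # Like cond_func above: a boolean Variable computed from the loop vars.
    return layers.less_than(x=i, y=limit)

def body(i, total):
    # Like body_func above: return updated loop vars rather than assigning
    # to them in place.
    total = layers.elementwise_add(total, i)
    i = layers.increment(i, value=1, in_place=False)
    return i, total

i, total = layers.while_loop(cond=cond, body=body, loop_vars=[i, total])

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
print(exe.run(fetch_list=[total])[0])  # [45]
```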
diff --git a/PaddleNLP/PaddleMT/transformer/transformer.yaml b/PaddleNLP/machine_translation/transformer/transformer.yaml
similarity index 95%
rename from PaddleNLP/PaddleMT/transformer/transformer.yaml
rename to PaddleNLP/machine_translation/transformer/transformer.yaml
index c6cbc074ed8a76c8b4d649e7631f0c125e165511..521396925f7e2d4721cab0566fa78e0dc68d6f99 100644
--- a/PaddleNLP/PaddleMT/transformer/transformer.yaml
+++ b/PaddleNLP/machine_translation/transformer/transformer.yaml
@@ -11,10 +11,11 @@ init_from_checkpoint: ""
 init_from_pretrain_model: ""
 # path of trained parameter, to make prediction
 init_from_params: "trained_params/step_100000"
-save_model_path: ""
-# the directory for saving checkpoints.
+# the directory for saving models.
+save_model_path: "saved_models"
+# deprecated, the directory for saving checkpoints.
 save_checkpoint: "trained_ckpts"
-# the directory for saving trained parameters.
+# deprecated, the directory for saving trained parameters.
 save_param: "trained_params"
 # the directory for saving inference model.
 inference_model_dir: "infer_model"
diff --git a/PaddleNLP/PaddleLARK/BERT/model/__init__.py b/PaddleNLP/machine_translation/transformer/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/model/__init__.py
rename to PaddleNLP/machine_translation/transformer/utils/__init__.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/check.py b/PaddleNLP/machine_translation/transformer/utils/check.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/check.py
rename to PaddleNLP/machine_translation/transformer/utils/check.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/configure.py b/PaddleNLP/machine_translation/transformer/utils/configure.py
similarity index 96%
rename from PaddleNLP/PaddleMT/transformer/utils/configure.py
rename to PaddleNLP/machine_translation/transformer/utils/configure.py
index 67e601282fee572518435eaed38a4ed8e26fc5f9..874b69ba8f034b379a6854cdc41da46fb60fcc52 100644
--- a/PaddleNLP/PaddleMT/transformer/utils/configure.py
+++ b/PaddleNLP/machine_translation/transformer/utils/configure.py
@@ -199,9 +199,14 @@ class PDConfig(object):
                                "Whether to perform model saving for inference.")
 
         # NOTE: args for profiler
-        self.default_g.add_arg("is_profiler", int, 0, "the switch of profiler tools. (used for benchmark)")
-        self.default_g.add_arg("profiler_path", str, './', "the profiler output file path. (used for benchmark)")
-        self.default_g.add_arg("max_iter", int, 0, "the max train batch num.(used for benchmark)")
+        self.default_g.add_arg(
+            "is_profiler", int, 0,
+            "the switch of profiler tools. (used for benchmark)")
+        self.default_g.add_arg(
+            "profiler_path", str, './',
+            "the profiler output file path. (used for benchmark)")
+        self.default_g.add_arg("max_iter", int, 0,
+                               "the max train batch num.(used for benchmark)")
 
         self.parser = parser
 
diff --git a/PaddleNLP/PaddleMT/transformer/utils/dist_utils.py b/PaddleNLP/machine_translation/transformer/utils/dist_utils.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/dist_utils.py
rename to PaddleNLP/machine_translation/transformer/utils/dist_utils.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/input_field.py b/PaddleNLP/machine_translation/transformer/utils/input_field.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/input_field.py
rename to PaddleNLP/machine_translation/transformer/utils/input_field.py
diff --git a/PaddleNLP/machine_translation/transformer/utils/load.py b/PaddleNLP/machine_translation/transformer/utils/load.py
new file mode 100644
index 0000000000000000000000000000000000000000..24c5fccc59cc13959b8696eaa819613e29ee4eb8
--- /dev/null
+++ b/PaddleNLP/machine_translation/transformer/utils/load.py
@@ -0,0 +1,24 @@
+import pickle
+import six
+import warnings
+from functools import partial
+
+import paddle.fluid as fluid
+
+
+def load(program, model_path, executor=None, var_list=None):
+    """
+    Load models saved with Python 2 when running under Python 3.
+    """
+    try:
+        fluid.load(program, model_path, executor, var_list)
+    except UnicodeDecodeError:
+        warnings.warn(
+            "An UnicodeDecodeError is catched, which might be caused by loading "
+            "a python2 saved model. Encoding of pickle.load would be set and "
+            "load again automatically.")
+        if six.PY3:
+            load_bak = pickle.load
+            pickle.load = partial(load_bak, encoding="latin1")
+            fluid.load(program, model_path, executor, var_list)
+            pickle.load = load_bak
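One caveat in the helper above: if the retried `fluid.load` raises again, the monkey-patched `pickle.load` is never restored. A hedged usage sketch that hardens this with `try/finally` (`train_program` and `exe` are assumed from the calling script):

```python
import pickle

import paddle.fluid as fluid
from utils.load import load   # the helper introduced above


def load_py2_checkpoint(train_program, model_path, exe):
    """Wrap utils.load.load so pickle.load is restored on any outcome."""
    load_bak = pickle.load
    try:
        load(train_program, model_path, executor=exe)
    finally:
        pickle.load = load_bak   # restore even if the retry fails
```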
diff --git a/PaddleNLP/PaddleLARK/BERT/.run_ce.sh b/PaddleNLP/pretrain_langauge_models/BERT/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/.run_ce.sh
rename to PaddleNLP/pretrain_langauge_models/BERT/.run_ce.sh
diff --git a/PaddleNLP/PaddleLARK/BERT/README.md b/PaddleNLP/pretrain_langauge_models/BERT/README.md
similarity index 98%
rename from PaddleNLP/PaddleLARK/BERT/README.md
rename to PaddleNLP/pretrain_langauge_models/BERT/README.md
index 47ceb84eef23fc0d621da5a839e80ebb10ccf4a0..30e9b28e8600647f4c2a94f2030d1272e6ca4670 100644
--- a/PaddleNLP/PaddleLARK/BERT/README.md
+++ b/PaddleNLP/pretrain_langauge_models/BERT/README.md
@@ -22,6 +22,8 @@
 | :------| :------: | :------: |:------: |:------: |
 | [BERT-Large, Uncased (Whole Word Masking)](https://bert-models.bj.bcebos.com/wwm_uncased_L-24_H-1024_A-16.tar.gz)| 24 | 1024 | 16 | 340M |
 | [BERT-Large, Cased (Whole Word Masking)](https://bert-models.bj.bcebos.com/wwm_cased_L-24_H-1024_A-16.tar.gz)| 24 | 1024 | 16 | 340M |
+| [RoBERTa-Base, Chinese](https://bert-models.bj.bcebos.com/chinese_roberta_wwm_ext_L-12_H-768_A-12.tar.gz) | 12 | 768 |12 |110M |
+| [RoBERTa-Large, Chinese](https://bert-models.bj.bcebos.com/chinese_roberta_wwm_large_ext_L-24_H-1024_A-16.tar.gz) | 24 | 1024 |16 |340M |
 | [BERT-Base, Uncased](https://bert-models.bj.bcebos.com/uncased_L-12_H-768_A-12.tar.gz) | 12 | 768 |12 |110M |
 | [BERT-Large, Uncased](https://bert-models.bj.bcebos.com/uncased_L-24_H-1024_A-16.tar.gz) | 24 | 1024 |16 |340M |
 |[BERT-Base, Cased](https://bert-models.bj.bcebos.com/cased_L-12_H-768_A-12.tar.gz)|12|768|12|110M|
@@ -415,5 +417,3 @@ for (size_t i = 0; i < output.front().data.length() / sizeof(float); i += 3) {
             << static_cast<float *>(output.front().data.data())[i + 2] << std::endl;
 }
 ```
-
-
diff --git a/PaddleNLP/PaddleLARK/BERT/reader/__init__.py b/PaddleNLP/pretrain_langauge_models/BERT/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/reader/__init__.py
rename to PaddleNLP/pretrain_langauge_models/BERT/__init__.py
diff --git a/PaddleNLP/PaddleLARK/BERT/_ce.py b/PaddleNLP/pretrain_langauge_models/BERT/_ce.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/_ce.py
rename to PaddleNLP/pretrain_langauge_models/BERT/_ce.py
diff --git a/PaddleNLP/PaddleLARK/BERT/batching.py b/PaddleNLP/pretrain_langauge_models/BERT/batching.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/batching.py
rename to PaddleNLP/pretrain_langauge_models/BERT/batching.py
diff --git a/PaddleNLP/PaddleLARK/BERT/convert_params.py b/PaddleNLP/pretrain_langauge_models/BERT/convert_params.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/convert_params.py
rename to PaddleNLP/pretrain_langauge_models/BERT/convert_params.py
diff --git a/PaddleNLP/PaddleLARK/BERT/data/demo_config/bert_config.json b/PaddleNLP/pretrain_langauge_models/BERT/data/demo_config/bert_config.json
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/demo_config/bert_config.json
rename to PaddleNLP/pretrain_langauge_models/BERT/data/demo_config/bert_config.json
diff --git a/PaddleNLP/PaddleLARK/BERT/data/demo_config/vocab.txt b/PaddleNLP/pretrain_langauge_models/BERT/data/demo_config/vocab.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/demo_config/vocab.txt
rename to PaddleNLP/pretrain_langauge_models/BERT/data/demo_config/vocab.txt
diff --git a/PaddleNLP/PaddleLARK/BERT/data/demo_wiki_tokens.txt b/PaddleNLP/pretrain_langauge_models/BERT/data/demo_wiki_tokens.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/demo_wiki_tokens.txt
rename to PaddleNLP/pretrain_langauge_models/BERT/data/demo_wiki_tokens.txt
diff --git a/PaddleNLP/PaddleLARK/BERT/data/train/demo_wiki_train.gz b/PaddleNLP/pretrain_langauge_models/BERT/data/train/demo_wiki_train.gz
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/train/demo_wiki_train.gz
rename to PaddleNLP/pretrain_langauge_models/BERT/data/train/demo_wiki_train.gz
diff --git a/PaddleNLP/PaddleLARK/BERT/data/validation/demo_wiki_validation.gz b/PaddleNLP/pretrain_langauge_models/BERT/data/validation/demo_wiki_validation.gz
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/validation/demo_wiki_validation.gz
rename to PaddleNLP/pretrain_langauge_models/BERT/data/validation/demo_wiki_validation.gz
diff --git a/PaddleNLP/PaddleLARK/BERT/dist_utils.py b/PaddleNLP/pretrain_langauge_models/BERT/dist_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/dist_utils.py
rename to PaddleNLP/pretrain_langauge_models/BERT/dist_utils.py
diff --git a/PaddleNLP/PaddleLARK/BERT/inference/CMakeLists.txt b/PaddleNLP/pretrain_langauge_models/BERT/inference/CMakeLists.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/inference/CMakeLists.txt
rename to PaddleNLP/pretrain_langauge_models/BERT/inference/CMakeLists.txt
diff --git a/PaddleNLP/PaddleLARK/BERT/inference/README.md b/PaddleNLP/pretrain_langauge_models/BERT/inference/README.md
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/inference/README.md
rename to PaddleNLP/pretrain_langauge_models/BERT/inference/README.md
diff --git a/PaddleNLP/PaddleLARK/BERT/inference/gen_demo_data.py b/PaddleNLP/pretrain_langauge_models/BERT/inference/gen_demo_data.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/inference/gen_demo_data.py
rename to PaddleNLP/pretrain_langauge_models/BERT/inference/gen_demo_data.py
diff --git a/PaddleNLP/PaddleLARK/BERT/inference/inference.cc b/PaddleNLP/pretrain_langauge_models/BERT/inference/inference.cc
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/inference/inference.cc
rename to PaddleNLP/pretrain_langauge_models/BERT/inference/inference.cc
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/__init__.py b/PaddleNLP/pretrain_langauge_models/BERT/model/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/__init__.py
rename to PaddleNLP/pretrain_langauge_models/BERT/model/__init__.py
diff --git a/PaddleNLP/PaddleLARK/BERT/model/bert.py b/PaddleNLP/pretrain_langauge_models/BERT/model/bert.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/model/bert.py
rename to PaddleNLP/pretrain_langauge_models/BERT/model/bert.py
diff --git a/PaddleNLP/PaddleLARK/BERT/model/classifier.py b/PaddleNLP/pretrain_langauge_models/BERT/model/classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/model/classifier.py
rename to PaddleNLP/pretrain_langauge_models/BERT/model/classifier.py
diff --git a/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py b/PaddleNLP/pretrain_langauge_models/BERT/model/transformer_encoder.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py
rename to PaddleNLP/pretrain_langauge_models/BERT/model/transformer_encoder.py
diff --git a/PaddleNLP/PaddleLARK/BERT/optimization.py b/PaddleNLP/pretrain_langauge_models/BERT/optimization.py
similarity index 98%
rename from PaddleNLP/PaddleLARK/BERT/optimization.py
rename to PaddleNLP/pretrain_langauge_models/BERT/optimization.py
index 0771ab77922dea90104ff67ab201bfb307212637..82ade38974e40cd46dd2f0bf2e37e99e412dd7e2 100644
--- a/PaddleNLP/PaddleLARK/BERT/optimization.py
+++ b/PaddleNLP/pretrain_langauge_models/BERT/optimization.py
@@ -158,7 +158,7 @@ def optimization(loss,
 
     else:
         if weight_decay > 0:
-            for param in train_program.global_block().all_parameters():
+            for param in train_program.all_parameters():
                 param_list[param.name] = param * 1.0
                 param_list[param.name].stop_gradient = True
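`Program.all_parameters()` is a convenience for `global_block().all_parameters()`, so the rename does not change behavior. For context, the snapshot being built here supports decoupled weight decay; a simplified sketch of the surrounding pattern, which omits the exclusion list and op-role guards of the real file:

```python
import paddle.fluid as fluid


def minimize_with_weight_decay(train_program, optimizer, loss,
                               weight_decay, lr):
    """Snapshot params, run the optimizer, then apply decoupled decay."""
    param_list = {}
    for param in train_program.all_parameters():
        param_list[param.name] = param * 1.0          # value snapshot
        param_list[param.name].stop_gradient = True   # exclude from backward

    _, param_grads = optimizer.minimize(loss)

    for param, _ in param_grads:
        # w <- w - lr * weight_decay * w_snapshot
        updated = param - param_list[param.name] * weight_decay * lr
        fluid.layers.assign(output=param, input=updated)
```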
 
diff --git a/PaddleNLP/PaddleLARK/BERT/predict_classifier.py b/PaddleNLP/pretrain_langauge_models/BERT/predict_classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/predict_classifier.py
rename to PaddleNLP/pretrain_langauge_models/BERT/predict_classifier.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/__init__.py b/PaddleNLP/pretrain_langauge_models/BERT/reader/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/__init__.py
rename to PaddleNLP/pretrain_langauge_models/BERT/reader/__init__.py
diff --git a/PaddleNLP/PaddleLARK/BERT/reader/cls.py b/PaddleNLP/pretrain_langauge_models/BERT/reader/cls.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/reader/cls.py
rename to PaddleNLP/pretrain_langauge_models/BERT/reader/cls.py
diff --git a/PaddleNLP/PaddleLARK/BERT/reader/pretraining.py b/PaddleNLP/pretrain_langauge_models/BERT/reader/pretraining.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/reader/pretraining.py
rename to PaddleNLP/pretrain_langauge_models/BERT/reader/pretraining.py
diff --git a/PaddleNLP/PaddleLARK/BERT/reader/squad.py b/PaddleNLP/pretrain_langauge_models/BERT/reader/squad.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/reader/squad.py
rename to PaddleNLP/pretrain_langauge_models/BERT/reader/squad.py
diff --git a/PaddleNLP/PaddleLARK/BERT/run_classifier.py b/PaddleNLP/pretrain_langauge_models/BERT/run_classifier.py
similarity index 99%
rename from PaddleNLP/PaddleLARK/BERT/run_classifier.py
rename to PaddleNLP/pretrain_langauge_models/BERT/run_classifier.py
index 221a14f768097afa8da3f96cd7e8c0f8690d1a99..ddf4f71a4927a7822cc1fa6cb55e0ba2c77866cf 100644
--- a/PaddleNLP/PaddleLARK/BERT/run_classifier.py
+++ b/PaddleNLP/pretrain_langauge_models/BERT/run_classifier.py
@@ -392,7 +392,7 @@ def main(args):
                 if steps % args.save_steps == 0:
                     save_path = os.path.join(args.checkpoints,
                                              "step_" + str(steps))
-                    fluid.io.save_persistables(exe, save_path, train_program)
+                    fluid.save(program=train_program, model_path=save_path)
 
                 if steps % args.validation_steps == 0:
                     print("Average throughtput: %s" % (np.average(throughput)))
@@ -409,7 +409,7 @@ def main(args):
                                  "test")
             except fluid.core.EOFException:
                 save_path = os.path.join(args.checkpoints, "step_" + str(steps))
-                fluid.io.save_persistables(exe, save_path, train_program)
+                fluid.save(program=train_program, model_path=save_path)
                 train_data_loader.reset()
                 break
         if args.enable_ce:
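The `fluid.io.save_persistables` → `fluid.save` migration pairs with `fluid.load` on the restore side; `fluid.save` writes a single `<path>.pdparams`/`<path>.pdopt` pair rather than one file per variable. A minimal round-trip sketch under those assumptions:

```python
import os

import paddle.fluid as fluid


def save_checkpoint(train_program, checkpoints_dir, steps):
    # New-style save: produces step_N.pdparams and step_N.pdopt.
    save_path = os.path.join(checkpoints_dir, "step_" + str(steps))
    fluid.save(program=train_program, model_path=save_path)
    return save_path


def restore_checkpoint(train_program, save_path, exe):
    # Counterpart of fluid.save; the executor fills the variables.
    fluid.load(program=train_program, model_path=save_path, executor=exe)
```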
diff --git a/PaddleNLP/PaddleLARK/BERT/run_squad.py b/PaddleNLP/pretrain_langauge_models/BERT/run_squad.py
similarity index 99%
rename from PaddleNLP/PaddleLARK/BERT/run_squad.py
rename to PaddleNLP/pretrain_langauge_models/BERT/run_squad.py
index e005b2439113d6a0d20a6cc145f3d0110f474af2..dcf93332e759fd95177ddbd81b1ee3073c72e0a9 100644
--- a/PaddleNLP/PaddleLARK/BERT/run_squad.py
+++ b/PaddleNLP/pretrain_langauge_models/BERT/run_squad.py
@@ -398,11 +398,11 @@ def train(args):
                 if steps % args.save_steps == 0 or steps == max_train_steps:
                     save_path = os.path.join(args.checkpoints,
                                              "step_" + str(steps))
-                    fluid.io.save_persistables(exe, save_path, train_program)
+                    fluid.save(program=train_program, model_path=save_path)
             except fluid.core.EOFException:
                 save_path = os.path.join(args.checkpoints,
                                          "step_" + str(steps) + "_final")
-                fluid.io.save_persistables(exe, save_path, train_program)
+                fluid.save(program=train_program, model_path=save_path)
                 train_data_loader.reset()
                 break
 
diff --git a/PaddleNLP/PaddleLARK/BERT/test_local_dist.sh b/PaddleNLP/pretrain_langauge_models/BERT/test_local_dist.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/test_local_dist.sh
rename to PaddleNLP/pretrain_langauge_models/BERT/test_local_dist.sh
diff --git a/PaddleNLP/PaddleLARK/BERT/tokenization.py b/PaddleNLP/pretrain_langauge_models/BERT/tokenization.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/tokenization.py
rename to PaddleNLP/pretrain_langauge_models/BERT/tokenization.py
diff --git a/PaddleNLP/PaddleLARK/BERT/train.py b/PaddleNLP/pretrain_langauge_models/BERT/train.py
similarity index 99%
rename from PaddleNLP/PaddleLARK/BERT/train.py
rename to PaddleNLP/pretrain_langauge_models/BERT/train.py
index f9a1fb49ef4fda9957f27e90b7a18cea6f6d4e4b..2a1704c967ce88dfb33206d48d5ca0062a691090 100644
--- a/PaddleNLP/PaddleLARK/BERT/train.py
+++ b/PaddleNLP/pretrain_langauge_models/BERT/train.py
@@ -412,7 +412,7 @@ def train(args):
 
             if steps % args.save_steps == 0:
                 save_path = os.path.join(args.checkpoints, "step_" + str(steps))
-                fluid.io.save_persistables(exe, save_path, train_program)
+                fluid.save(program=train_program, model_path=save_path)
 
             if args.validation_set_dir and steps % args.validation_steps == 0:
                 vali_cost, vali_lm_cost, vali_acc, vali_steps, vali_speed = predict(
diff --git a/PaddleNLP/PaddleLARK/BERT/train.sh b/PaddleNLP/pretrain_langauge_models/BERT/train.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/train.sh
rename to PaddleNLP/pretrain_langauge_models/BERT/train.sh
diff --git a/PaddleNLP/PaddleLARK/ELMo/utils/__init__.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/utils/__init__.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/__init__.py
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/args.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/args.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/args.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/args.py
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/cards.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/cards.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/cards.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/cards.py
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/fp16.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/fp16.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/fp16.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/fp16.py
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/init.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/init.py
similarity index 62%
rename from PaddleNLP/PaddleLARK/BERT/utils/init.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/init.py
index df2406b5c52e04215634ba0b6f6e4c554eadf0d6..b6d15f9b04db9c8fb10137737b617445b3c78230 100644
--- a/PaddleNLP/PaddleLARK/BERT/utils/init.py
+++ b/PaddleNLP/pretrain_langauge_models/BERT/utils/init.py
@@ -25,7 +25,7 @@ import paddle.fluid as fluid
 
 def cast_fp32_to_fp16(exe, main_program):
     print("Cast parameters to float16 data format.")
-    for param in main_program.global_block().all_parameters():
+    for param in main_program.all_parameters():
         if not param.name.endswith(".master"):
             param_t = fluid.global_scope().find_var(param.name).get_tensor()
             data = np.array(param_t)
@@ -38,21 +38,9 @@ def cast_fp32_to_fp16(exe, main_program):
 
 
 def init_checkpoint(exe, init_checkpoint_path, main_program, use_fp16=False):
-    assert os.path.exists(
-        init_checkpoint_path), "[%s] cann't be found." % init_checkpoint_path
+    fluid.load(
+        program=main_program, model_path=init_checkpoint_path, executor=exe)
 
-    def existed_persitables(var):
-        if not fluid.io.is_persistable(var):
-            return False
-        if os.path.exists(os.path.join(init_checkpoint_path, var.name)):
-            print("INIT {}".format(var.name))
-            return True
-
-    fluid.io.load_vars(
-        exe,
-        init_checkpoint_path,
-        main_program=main_program,
-        predicate=existed_persitables)
     print("Load model from {}".format(init_checkpoint_path))
 
     if use_fp16:
@@ -63,24 +51,8 @@ def init_pretraining_params(exe,
                             pretraining_params_path,
                             main_program,
                             use_fp16=False):
-    assert os.path.exists(pretraining_params_path
-                          ), "[%s] cann't be found." % pretraining_params_path
-
-    def existed_params(var):
-        if not isinstance(var, fluid.framework.Parameter):
-            return False
-        if os.path.exists(os.path.join(pretraining_params_path, var.name)):
-            print("INIT {}".format(var.name))
-            return True
-        else:
-            print("SKIP {}".format(var.name))
-            return False
-
-    fluid.io.load_vars(
-        exe,
-        pretraining_params_path,
-        main_program=main_program,
-        predicate=existed_params)
+    fluid.load(
+        program=main_program, model_path=pretraining_params_path, executor=exe)
     print("Load pretraining parameters from {}.".format(
         pretraining_params_path))
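With the predicate-based `fluid.io.load_vars` removed, existence checks and variable matching are delegated to `fluid.load`, which raises if the path cannot be resolved. A short usage sketch of the two rewritten helpers; the paths are placeholders:

```python
import paddle.fluid as fluid
from utils.init import init_checkpoint, init_pretraining_params

exe = fluid.Executor(fluid.CPUPlace())
main_program = fluid.default_main_program()
exe.run(fluid.default_startup_program())

# Resume the full training state from a checkpoint...
init_checkpoint(exe, "checkpoints/step_100000", main_program)

# ...or warm-start only from pretrained parameters, optionally
# casting them to float16 afterwards.
init_pretraining_params(exe, "pretrained/params", main_program,
                        use_fp16=False)
```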
 
diff --git a/PaddleNLP/PaddleLARK/ELMo/.run_ce.sh b/PaddleNLP/pretrain_langauge_models/ELMo/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/.run_ce.sh
rename to PaddleNLP/pretrain_langauge_models/ELMo/.run_ce.sh
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/bilm.py b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/bilm.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/bilm.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/bilm.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/conf/q2b.dic b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/conf/q2b.dic
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/conf/q2b.dic
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/conf/q2b.dic
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/dev/dev.tsv b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/dev/dev.tsv
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/dev/dev.tsv
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/dev/dev.tsv
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/tag.dic b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/tag.dic
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/tag.dic
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/tag.dic
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/train/train.tsv b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/train/train.tsv
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/train/train.tsv
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/train/train.tsv
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/network.py b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/network.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/network.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/network.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/reader.py b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/reader.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/reader.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/reader.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/run.sh b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/run.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/run.sh
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/run.sh
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/train.py b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/train.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/train.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/train.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/README.md b/PaddleNLP/pretrain_langauge_models/ELMo/README.md
similarity index 99%
rename from PaddleNLP/PaddleLARK/ELMo/README.md
rename to PaddleNLP/pretrain_langauge_models/ELMo/README.md
index edf79e2b509e9eb5c3faf85c281a5ab3680adec7..4f4b87b118fb46ec932676584ec17aad1c5bab45 100755
--- a/PaddleNLP/PaddleLARK/ELMo/README.md
+++ b/PaddleNLP/pretrain_langauge_models/ELMo/README.md
@@ -90,5 +90,3 @@ word_embedding=fluid.layers.concat(input=[elmo_embedding, word_embedding], axis=
 
 ### 参考论文
 [Deep contextualized word representations](https://arxiv.org/abs/1802.05365)
-
-
diff --git a/PaddleNLP/PaddleLARK/XLNet/model/__init__.py b/PaddleNLP/pretrain_langauge_models/ELMo/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/model/__init__.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/__init__.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/_ce.py b/PaddleNLP/pretrain_langauge_models/ELMo/_ce.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/_ce.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/_ce.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/args.py b/PaddleNLP/pretrain_langauge_models/ELMo/args.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/args.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/args.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/data.py b/PaddleNLP/pretrain_langauge_models/ELMo/data.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/data.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/dev/sentence_file.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/dev/sentence_file.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/dev/sentence_file.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/dev/sentence_file.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/dev/sentence_file_2.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/dev/sentence_file_2.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/dev/sentence_file_2.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/dev/sentence_file_2.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/train/sentence_file.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/train/sentence_file.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/train/sentence_file.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/train/sentence_file.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/train/sentence_file_1.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/train/sentence_file_1.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/train/sentence_file_1.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/train/sentence_file_1.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/vocabulary_min5k.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/vocabulary_min5k.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/vocabulary_min5k.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/vocabulary_min5k.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/lm_model.py b/PaddleNLP/pretrain_langauge_models/ELMo/lm_model.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/lm_model.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/lm_model.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/run.sh b/PaddleNLP/pretrain_langauge_models/ELMo/run.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/run.sh
rename to PaddleNLP/pretrain_langauge_models/ELMo/run.sh
diff --git a/PaddleNLP/PaddleLARK/ELMo/train.py b/PaddleNLP/pretrain_langauge_models/ELMo/train.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/train.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/train.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/reader/__init__.py b/PaddleNLP/pretrain_langauge_models/ELMo/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/reader/__init__.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/utils/__init__.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/utils/cards.py b/PaddleNLP/pretrain_langauge_models/ELMo/utils/cards.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/utils/cards.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/utils/cards.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/utils/init.py b/PaddleNLP/pretrain_langauge_models/ELMo/utils/init.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/utils/init.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/utils/init.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/.run_ce.sh b/PaddleNLP/pretrain_langauge_models/XLNet/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/.run_ce.sh
rename to PaddleNLP/pretrain_langauge_models/XLNet/.run_ce.sh
diff --git a/PaddleNLP/PaddleLARK/XLNet/README.md b/PaddleNLP/pretrain_langauge_models/XLNet/README.md
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/README.md
rename to PaddleNLP/pretrain_langauge_models/XLNet/README.md
diff --git a/PaddleNLP/PaddleLARK/XLNet/README_cn.md b/PaddleNLP/pretrain_langauge_models/XLNet/README_cn.md
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/README_cn.md
rename to PaddleNLP/pretrain_langauge_models/XLNet/README_cn.md
diff --git a/PaddleNLP/PaddleLARK/XLNet/_ce.py b/PaddleNLP/pretrain_langauge_models/XLNet/_ce.py
similarity index 99%
rename from PaddleNLP/PaddleLARK/XLNet/_ce.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/_ce.py
index 094434273c106f0b0862a8bc0e63f54f4f208289..f35f533c513fa21f335a65e0fea3648a9470e0d8 100644
--- a/PaddleNLP/PaddleLARK/XLNet/_ce.py
+++ b/PaddleNLP/pretrain_langauge_models/XLNet/_ce.py
@@ -7,7 +7,6 @@ from kpi import CostKpi, DurationKpi, AccKpi
 
 #### NOTE kpi.py should be shared across models in some way!!!!
 
-
 train_duration_sts_b_card1 = DurationKpi(
     'train_duration_sts_b_card1', 0.01, 0, actived=True)
 train_cost_sts_b_card1 = CostKpi(
diff --git a/PaddleNLP/PaddleLARK/XLNet/classifier_utils.py b/PaddleNLP/pretrain_langauge_models/XLNet/classifier_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/classifier_utils.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/classifier_utils.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/data_utils.py b/PaddleNLP/pretrain_langauge_models/XLNet/data_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/data_utils.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/data_utils.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/utils/__init__.py b/PaddleNLP/pretrain_langauge_models/XLNet/model/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/utils/__init__.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/model/__init__.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/model/classifier.py b/PaddleNLP/pretrain_langauge_models/XLNet/model/classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/model/classifier.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/model/classifier.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/model/xlnet.py b/PaddleNLP/pretrain_langauge_models/XLNet/model/xlnet.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/model/xlnet.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/model/xlnet.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/modeling.py b/PaddleNLP/pretrain_langauge_models/XLNet/modeling.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/modeling.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/modeling.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/optimization.py b/PaddleNLP/pretrain_langauge_models/XLNet/optimization.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/optimization.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/optimization.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/prepro_utils.py b/PaddleNLP/pretrain_langauge_models/XLNet/prepro_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/prepro_utils.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/prepro_utils.py
diff --git a/PaddleNLP/PaddleMT/transformer/__init__.py b/PaddleNLP/pretrain_langauge_models/XLNet/reader/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/__init__.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/reader/__init__.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/reader/cls.py b/PaddleNLP/pretrain_langauge_models/XLNet/reader/cls.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/reader/cls.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/reader/cls.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/reader/squad.py b/PaddleNLP/pretrain_langauge_models/XLNet/reader/squad.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/reader/squad.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/reader/squad.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/run_classifier.py b/PaddleNLP/pretrain_langauge_models/XLNet/run_classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/run_classifier.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/run_classifier.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/run_squad.py b/PaddleNLP/pretrain_langauge_models/XLNet/run_squad.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/run_squad.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/run_squad.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/squad_utils.py b/PaddleNLP/pretrain_langauge_models/XLNet/squad_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/squad_utils.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/squad_utils.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/__init__.py b/PaddleNLP/pretrain_langauge_models/XLNet/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/__init__.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/utils/__init__.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/utils/args.py b/PaddleNLP/pretrain_langauge_models/XLNet/utils/args.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/utils/args.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/utils/args.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/utils/cards.py b/PaddleNLP/pretrain_langauge_models/XLNet/utils/cards.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/utils/cards.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/utils/cards.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/utils/init.py b/PaddleNLP/pretrain_langauge_models/XLNet/utils/init.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/utils/init.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/utils/init.py
diff --git a/PaddleNLP/sentiment_classification/README.md b/PaddleNLP/sentiment_classification/README.md
index 696e615f3fe90b31805c84d892af1a96c12539de..30a24cdfca18eacb4cbe40fe8a9b913ea83805ca 100644
--- a/PaddleNLP/sentiment_classification/README.md
+++ b/PaddleNLP/sentiment_classification/README.md
@@ -29,7 +29,7 @@
 
 1. PaddlePaddle 安装
 
-   This project requires PaddlePaddle Fluid 1.6 or later; please follow the [installation guide](http://www.paddlepaddle.org/#quick-start) to install it
+   This project requires PaddlePaddle Fluid 1.7 or later; please follow the [installation guide](http://www.paddlepaddle.org/#quick-start) to install it
 
 2. 代码安装
 
diff --git a/PaddleNLP/sentiment_classification/inference_model.py b/PaddleNLP/sentiment_classification/inference_model.py
index e7d9f5baaefc78b661d3db27d47fd7001c3b996a..7212baf6250d1f355e7a6cc3cbc54557ae8a13c2 100644
--- a/PaddleNLP/sentiment_classification/inference_model.py
+++ b/PaddleNLP/sentiment_classification/inference_model.py
@@ -13,6 +13,7 @@ from run_classifier import create_model
 import utils
 import reader
 
+
 def do_save_inference_model(args):
     if args.use_cuda:
         dev_count = fluid.core.get_cuda_device_count()
@@ -20,9 +21,9 @@ def do_save_inference_model(args):
     else:
         dev_count = int(os.environ.get('CPU_NUM', 1))
         place = fluid.CPUPlace()
-    
+
     exe = fluid.Executor(place)
-    
+
     test_prog = fluid.Program()
     startup_prog = fluid.Program()
 
@@ -36,7 +37,7 @@ def do_save_inference_model(args):
 
     test_prog = test_prog.clone(for_test=True)
     exe.run(startup_prog)
-    
+
     assert (args.init_checkpoint)
 
     if args.init_checkpoint:
@@ -53,6 +54,7 @@ def do_save_inference_model(args):
 
     print("save inference model at %s" % (args.inference_model_dir))
 
+
 def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase):
     """
     Inference Function
@@ -61,13 +63,16 @@ def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase):
     test_pyreader.start()
     while True:
         try:
-            np_props = exe.run(program=test_program, fetch_list=fetch_list, return_numpy=True)
+            np_props = exe.run(program=test_program,
+                               fetch_list=fetch_list,
+                               return_numpy=True)
             for probs in np_props[0]:
                 print("%d\t%f\t%f" % (np.argmax(probs), probs[0], probs[1]))
         except fluid.core.EOFException:
             test_pyreader.reset()
             break
 
+
 def test_inference_model(args):
     if args.use_cuda:
         dev_count = fluid.core.get_cuda_device_count()
@@ -75,7 +80,7 @@ def test_inference_model(args):
     else:
         dev_count = int(os.environ.get('CPU_NUM', 1))
         place = fluid.CPUPlace()
-    
+
     exe = fluid.Executor(place)
     test_prog = fluid.Program()
     startup_prog = fluid.Program()
@@ -92,7 +97,8 @@ def test_inference_model(args):
     exe = fluid.Executor(place)
     exe.run(startup_prog)
 
-    processor = reader.SentaProcessor(data_dir=args.data_dir,
+    processor = reader.SentaProcessor(
+        data_dir=args.data_dir,
         vocab_path=args.vocab_path,
         random_seed=args.random_seed,
         max_seq_len=args.max_seq_len)
@@ -107,14 +113,14 @@ def test_inference_model(args):
         params_filename="params.pdparams")
 
     infer_data_generator = processor.data_generator(
-        batch_size=args.batch_size,
+        batch_size=args.batch_size / dev_count,
         phase="infer",
         epoch=1,
         shuffle=False)
-    
-    infer_pyreader.decorate_sample_list_generator(infer_data_generator)
-    inference(exe, test_prog, infer_pyreader,
-        [probs.name], "infer")
+
+    infer_pyreader.set_sample_list_generator(infer_data_generator)
+    inference(exe, test_prog, infer_pyreader, [probs.name], "infer")
+
 
 if __name__ == "__main__":
     args = PDConfig('senta_config.json')
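A caution on the `batch_size / dev_count` edits in these sentiment_classification files: under Python 3, `/` is true division and yields a float, so a data generator that slices lists or calls `range()` with the result will fail. Floor division is the safe spelling in that case; a minimal illustration:

```python
batch_size, dev_count = 32, 3

per_device = batch_size / dev_count       # 10.666..., a float on Python 3
per_device_int = batch_size // dev_count  # 10, an int

# A generator signature like data_generator(batch_size=...) that
# slices examples internally needs the integer form:
assert isinstance(per_device_int, int)
```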
diff --git a/PaddleNLP/sentiment_classification/inference_model_ernie.py b/PaddleNLP/sentiment_classification/inference_model_ernie.py
index 1ea6ec5323c6b334a505c1c062d9d954c7f4f0d0..99d421eed0cd05b2cab43acbc162ef4ce161e277 100644
--- a/PaddleNLP/sentiment_classification/inference_model_ernie.py
+++ b/PaddleNLP/sentiment_classification/inference_model_ernie.py
@@ -1,8 +1,8 @@
 # -*- coding: utf_8 -*-
 import os
 import sys
-sys.path.append("../")
-sys.path.append("../models/classification")
+sys.path.append("../shared_modules/")
+sys.path.append("../shared_modules/models/classification")
 import paddle
 import paddle.fluid as fluid
 import numpy as np
@@ -17,6 +17,7 @@ from models.representation.ernie import ErnieConfig
 from models.representation.ernie import ernie_encoder, ernie_encoder_with_paddle_hub
 from preprocess.ernie import task_reader
 
+
 def do_save_inference_model(args):
 
     ernie_config = ErnieConfig(args.ernie_config_path)
@@ -28,30 +29,29 @@ def do_save_inference_model(args):
     else:
         dev_count = int(os.environ.get('CPU_NUM', 1))
         place = fluid.CPUPlace()
-    
+
     exe = fluid.Executor(place)
-    
+
     test_prog = fluid.Program()
     startup_prog = fluid.Program()
 
     with fluid.program_guard(test_prog, startup_prog):
         with fluid.unique_name.guard():
             infer_pyreader, ernie_inputs, labels = ernie_pyreader(
-                args,
-                pyreader_name="infer_reader")
-            
+                args, pyreader_name="infer_reader")
+
             if args.use_paddle_hub:
-                embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, args.max_seq_len)
+                embeddings = ernie_encoder_with_paddle_hub(ernie_inputs,
+                                                           args.max_seq_len)
             else:
-                embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config)
+                embeddings = ernie_encoder(
+                    ernie_inputs, ernie_config=ernie_config)
 
-            probs = create_model(args,
-                    embeddings,
-                    labels=labels,
-                    is_prediction=True)
+            probs = create_model(
+                args, embeddings, labels=labels, is_prediction=True)
     test_prog = test_prog.clone(for_test=True)
     exe.run(startup_prog)
-    
+
     assert (args.init_checkpoint)
 
     if args.init_checkpoint:
@@ -59,11 +59,11 @@ def do_save_inference_model(args):
 
     fluid.io.save_inference_model(
         args.inference_model_dir,
-        feeded_var_names=[ernie_inputs["src_ids"].name,
-                          ernie_inputs["sent_ids"].name,
-                          ernie_inputs["pos_ids"].name,
-                          ernie_inputs["input_mask"].name,
-                          ernie_inputs["seq_lens"].name],
+        feeded_var_names=[
+            ernie_inputs["src_ids"].name, ernie_inputs["sent_ids"].name,
+            ernie_inputs["pos_ids"].name, ernie_inputs["input_mask"].name,
+            ernie_inputs["seq_lens"].name
+        ],
         target_vars=[probs],
         executor=exe,
         main_program=test_prog,
@@ -72,6 +72,7 @@ def do_save_inference_model(args):
 
     print("save inference model at %s" % (args.inference_model_dir))
 
+
 def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase):
     """
     Inference Function
@@ -80,13 +81,16 @@ def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase):
     test_pyreader.start()
     while True:
         try:
-            np_props = exe.run(program=test_program, fetch_list=fetch_list, return_numpy=True)
+            np_props = exe.run(program=test_program,
+                               fetch_list=fetch_list,
+                               return_numpy=True)
             for probs in np_props[0]:
                 print("%d\t%f\t%f" % (np.argmax(probs), probs[0], probs[1]))
         except fluid.core.EOFException:
             test_pyreader.reset()
             break
 
+
 def test_inference_model(args):
     ernie_config = ErnieConfig(args.ernie_config_path)
     ernie_config.print_config()
@@ -97,9 +101,9 @@ def test_inference_model(args):
     else:
         dev_count = int(os.environ.get('CPU_NUM', 1))
         place = fluid.CPUPlace()
-    
+
     exe = fluid.Executor(place)
-    
+
     reader = task_reader.ClassifyReader(
         vocab_path=args.vocab_path,
         label_map_config=args.label_map_config,
@@ -113,15 +117,11 @@ def test_inference_model(args):
     with fluid.program_guard(test_prog, startup_prog):
         with fluid.unique_name.guard():
             infer_pyreader, ernie_inputs, labels = ernie_pyreader(
-                args,
-                pyreader_name="infer_pyreader")
+                args, pyreader_name="infer_pyreader")
 
             embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config)
             probs = create_model(
-                args,
-                embeddings,
-                labels=labels,
-                is_prediction=True)
+                args, embeddings, labels=labels, is_prediction=True)
 
     test_prog = test_prog.clone(for_test=True)
     exe.run(startup_prog)
@@ -129,7 +129,7 @@ def test_inference_model(args):
     assert (args.inference_model_dir)
     infer_data_generator = reader.data_generator(
         input_file=args.test_set,
-        batch_size=args.batch_size,
+        batch_size=args.batch_size / dev_count,
         phase="infer",
         epoch=1,
         shuffle=False)
@@ -140,9 +140,9 @@ def test_inference_model(args):
         model_filename="model.pdmodel",
         params_filename="params.pdparams")
 
-    infer_pyreader.decorate_batch_generator(infer_data_generator)
-    inference(exe, test_prog, infer_pyreader,
-        [probs.name], "infer")
+    infer_pyreader.set_batch_generator(infer_data_generator)
+    inference(exe, test_prog, infer_pyreader, [probs.name], "infer")
+
 
 if __name__ == "__main__":
     args = PDConfig()
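The saved inference model can be read back with `fluid.io.load_inference_model`, using the same `model_filename`/`params_filename` chosen at save time, exactly as `test_inference_model` above does. A condensed sketch; the directory name is a placeholder:

```python
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())

infer_prog, feed_names, fetch_targets = fluid.io.load_inference_model(
    "inference_model_dir",            # args.inference_model_dir at save time
    exe,
    model_filename="model.pdmodel",
    params_filename="params.pdparams")

# feed_names preserves the feeded_var_names order used above:
# src_ids, sent_ids, pos_ids, input_mask, seq_lens
print(feed_names, [t.name for t in fetch_targets])
```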
diff --git a/PaddleNLP/sentiment_classification/run_classifier.py b/PaddleNLP/sentiment_classification/run_classifier.py
index 3727380128dbb7c8753c21f8dca90b0089a0cbd1..93340a2926badff44cc0cacc114c73c082a0e9fa 100644
--- a/PaddleNLP/sentiment_classification/run_classifier.py
+++ b/PaddleNLP/sentiment_classification/run_classifier.py
@@ -12,8 +12,8 @@ import argparse
 import numpy as np
 import multiprocessing
 import sys
-sys.path.append("../models/classification/")
-sys.path.append("../")
+sys.path.append("../shared_modules/models/classification/")
+sys.path.append("../shared_modules/")
 
 from nets import bow_net
 from nets import lstm_net
@@ -30,24 +30,19 @@ import paddle.fluid as fluid
 import reader
 from utils import init_checkpoint
 
-def create_model(args,
-                 pyreader_name,
-                 num_labels,
-                 is_prediction=False):
 
+def create_model(args, pyreader_name, num_labels, is_prediction=False):
     """
     Create Model for sentiment classification
     """
-    
-    data = fluid.layers.data(
-            name="src_ids", shape=[-1, args.max_seq_len], dtype='int64')
-    label = fluid.layers.data(
-            name="label", shape=[-1, 1], dtype="int64")
-    seq_len = fluid.layers.data(
-            name="seq_len", shape=[-1], dtype="int64")
-    
-    data_reader = fluid.io.PyReader(feed_list=[data, label, seq_len], 
-        capacity=4, iterable=False)
+
+    data = fluid.data(
+        name="src_ids", shape=[None, args.max_seq_len], dtype='int64')
+    label = fluid.data(name="label", shape=[None, 1], dtype="int64")
+    seq_len = fluid.data(name="seq_len", shape=[None], dtype="int64")
+
+    data_reader = fluid.io.DataLoader.from_generator(
+        feed_list=[data, label, seq_len], capacity=4, iterable=False)
 
     if args.model_type == "bilstm_net":
         network = bilstm_net
@@ -63,18 +58,19 @@ def create_model(args,
         raise ValueError("Unknown network type!")
 
     if is_prediction:
-        probs = network(data, seq_len, None, args.vocab_size, is_prediction=is_prediction)
+        probs = network(
+            data, seq_len, None, args.vocab_size, is_prediction=is_prediction)
         print("create inference model...")
         return data_reader, probs, [data.name, seq_len.name]
 
-    ce_loss, probs = network(data, seq_len, label, args.vocab_size, is_prediction=is_prediction)
+    ce_loss, probs = network(
+        data, seq_len, label, args.vocab_size, is_prediction=is_prediction)
     loss = fluid.layers.mean(x=ce_loss)
     num_seqs = fluid.layers.create_tensor(dtype='int64')
     accuracy = fluid.layers.accuracy(input=probs, label=label, total=num_seqs)
     return data_reader, loss, accuracy, num_seqs
 
 
-
 def evaluate(exe, test_program, test_pyreader, fetch_list, eval_phase):
     """
     Evaluation Function
@@ -99,8 +95,8 @@ def evaluate(exe, test_program, test_pyreader, fetch_list, eval_phase):
             break
     time_end = time.time()
     print("[%s evaluation] ave loss: %f, ave acc: %f, elapsed time: %f s" %
-        (eval_phase, np.sum(total_cost) / np.sum(total_num_seqs),
-        np.sum(total_acc) / np.sum(total_num_seqs), time_end - time_begin))
+          (eval_phase, np.sum(total_cost) / np.sum(total_num_seqs),
+           np.sum(total_acc) / np.sum(total_num_seqs), time_end - time_begin))
 
 
 def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase):
@@ -111,8 +107,9 @@ def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase):
     time_begin = time.time()
     while True:
         try:
-            np_props = exe.run(program=test_program, fetch_list=fetch_list,
-                                return_numpy=True)
+            np_props = exe.run(program=test_program,
+                               fetch_list=fetch_list,
+                               return_numpy=True)
             for probs in np_props[0]:
                 print("%d\t%f\t%f" % (np.argmax(probs), probs[0], probs[1]))
         except fluid.core.EOFException:
@@ -135,10 +132,11 @@ def main(args):
     exe = fluid.Executor(place)
 
     task_name = args.task_name.lower()
-    processor = reader.SentaProcessor(data_dir=args.data_dir,
-                                      vocab_path=args.vocab_path,
-                                      random_seed=args.random_seed,
-                                      max_seq_len=args.max_seq_len)
+    processor = reader.SentaProcessor(
+        data_dir=args.data_dir,
+        vocab_path=args.vocab_path,
+        random_seed=args.random_seed,
+        max_seq_len=args.max_seq_len)
     num_labels = len(processor.get_labels())
 
     if not (args.do_train or args.do_val or args.do_infer):
@@ -151,7 +149,7 @@ def main(args):
 
     if args.do_train:
         train_data_generator = processor.data_generator(
-            batch_size=args.batch_size,
+            batch_size=args.batch_size / dev_count,
             phase='train',
             epoch=args.epoch,
             shuffle=True)
@@ -183,11 +181,11 @@ def main(args):
             lower_mem, upper_mem, unit = fluid.contrib.memory_usage(
                 program=train_program, batch_size=args.batch_size)
             print("Theoretical memory usage in training: %.3f - %.3f %s" %
-                (lower_mem, upper_mem, unit))
+                  (lower_mem, upper_mem, unit))
 
     if args.do_val:
         test_data_generator = processor.data_generator(
-            batch_size=args.batch_size,
+            batch_size=args.batch_size / dev_count,
             phase='dev',
             epoch=1,
             shuffle=False)
@@ -204,7 +202,7 @@ def main(args):
 
     if args.do_infer:
         infer_data_generator = processor.data_generator(
-            batch_size=args.batch_size,
+            batch_size=args.batch_size / dev_count,
             phase='infer',
             epoch=1,
             shuffle=False)
@@ -223,30 +221,25 @@ def main(args):
     if args.do_train:
         if args.init_checkpoint:
             init_checkpoint(
-                exe,
-                args.init_checkpoint,
-                main_program=startup_prog)
+                exe, args.init_checkpoint, main_program=startup_prog)
 
     elif args.do_val or args.do_infer:
         if not args.init_checkpoint:
             raise ValueError("args 'init_checkpoint' should be set if"
                              "only doing validation or testing!")
-        init_checkpoint(
-            exe,
-            args.init_checkpoint,
-            main_program=startup_prog)
+        init_checkpoint(exe, args.init_checkpoint, main_program=startup_prog)
 
     if args.do_train:
         train_exe = exe
-        train_reader.decorate_sample_list_generator(train_data_generator)
+        train_reader.set_sample_list_generator(train_data_generator)
     else:
         train_exe = None
     if args.do_val:
         test_exe = exe
-        test_reader.decorate_sample_list_generator(test_data_generator)
+        test_reader.set_sample_list_generator(test_data_generator)
     if args.do_infer:
         test_exe = exe
-        infer_reader.decorate_sample_list_generator(infer_data_generator)
+        infer_reader.set_sample_list_generator(infer_data_generator)
 
     if args.do_train:
         train_reader.start()
@@ -262,7 +255,9 @@ def main(args):
                 else:
                     fetch_list = []
 
-                outputs = train_exe.run(program=train_program, fetch_list=fetch_list, return_numpy=False)
+                outputs = train_exe.run(program=train_program,
+                                        fetch_list=fetch_list,
+                                        return_numpy=False)
                 #print("finished one step")
                 if steps % args.skip_steps == 0:
                     np_loss, np_acc, np_num_seqs = outputs
@@ -274,35 +269,37 @@ def main(args):
                     total_num_seqs.extend(np_num_seqs)
 
                     if args.verbose:
-                        verbose = "train pyreader queue size: %d, " % train_pyreader.queue.size()
+                        verbose = "train pyreader queue size: %d, " % train_pyreader.queue.size(
+                        )
                         print(verbose)
 
                     time_end = time.time()
                     used_time = time_end - time_begin
                     print("step: %d, ave loss: %f, "
-                        "ave acc: %f, speed: %f steps/s" %
-                        (steps, np.sum(total_cost) / np.sum(total_num_seqs),
-                        np.sum(total_acc) / np.sum(total_num_seqs),
-                        args.skip_steps / used_time))
+                          "ave acc: %f, speed: %f steps/s" %
+                          (steps, np.sum(total_cost) / np.sum(total_num_seqs),
+                           np.sum(total_acc) / np.sum(total_num_seqs),
+                           args.skip_steps / used_time))
                     total_cost, total_acc, total_num_seqs = [], [], []
                     time_begin = time.time()
 
                 if steps % args.save_steps == 0:
                     save_path = os.path.join(args.checkpoints,
-                                         "step_" + str(steps))
-                    fluid.io.save_persistables(exe, save_path, train_program)
+                                             "step_" + str(steps), "checkpoint")
+                    fluid.save(train_program, save_path)
 
                 if steps % args.validation_steps == 0:
                     # evaluate dev set
                     if args.do_val:
                         print("do evalatation")
                         evaluate(exe, test_prog, test_reader,
-                                [loss.name, accuracy.name, num_seqs.name],
-                                "dev")
+                                 [loss.name, accuracy.name, num_seqs.name],
+                                 "dev")
 
             except fluid.core.EOFException:
-                save_path = os.path.join(args.checkpoints, "step_" + str(steps))
-                fluid.io.save_persistables(exe, save_path, train_program)
+                save_path = os.path.join(args.checkpoints, "step_" + str(steps),
+                                         "checkpoint")
+                fluid.save(train_program, save_path)
                 train_reader.reset()
                 break
 
@@ -310,13 +307,12 @@ def main(args):
     if args.do_val:
         print("Final validation result:")
         evaluate(exe, test_prog, test_reader,
-            [loss.name, accuracy.name, num_seqs.name], "dev")
+                 [loss.name, accuracy.name, num_seqs.name], "dev")
 
     # final eval on test set
     if args.do_infer:
         print("Final test result:")
-        inference(exe, infer_prog, infer_reader,
-            [prop.name], "infer")
+        inference(exe, infer_prog, infer_reader, [prop.name], "infer")
 
 
 if __name__ == "__main__":
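The run_classifier.py changes above bundle three API migrations: `fluid.layers.data` → `fluid.data` (with `None` rather than `-1` for variable dimensions), `fluid.io.PyReader` → `fluid.io.DataLoader.from_generator`, and `decorate_sample_list_generator` → `set_sample_list_generator`. A minimal sketch of the migrated reader setup, with the sample-list generator left as an assumption:

```python
import paddle.fluid as fluid

max_seq_len = 128

# fluid.data uses None for the batch / variable-length dimensions.
data = fluid.data(name="src_ids", shape=[None, max_seq_len], dtype="int64")
label = fluid.data(name="label", shape=[None, 1], dtype="int64")
seq_len = fluid.data(name="seq_len", shape=[None], dtype="int64")

# DataLoader.from_generator replaces the old fluid.io.PyReader.
loader = fluid.io.DataLoader.from_generator(
    feed_list=[data, label, seq_len], capacity=4, iterable=False)

# decorate_sample_list_generator was renamed; with a generator yielding
# [(src_ids, label, seq_len), ...] sample lists:
# loader.set_sample_list_generator(sample_list_generator)
```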
diff --git a/PaddleNLP/sentiment_classification/run_ernie_classifier.py b/PaddleNLP/sentiment_classification/run_ernie_classifier.py
index 0e89414d5ef85c08ba4e029b4870c77eba39f031..21ab742616d4cf62f63583df7ede742be87138ef 100644
--- a/PaddleNLP/sentiment_classification/run_ernie_classifier.py
+++ b/PaddleNLP/sentiment_classification/run_ernie_classifier.py
@@ -16,8 +16,8 @@ import sys
 import paddle
 import paddle.fluid as fluid
 
-sys.path.append("../models/classification/")
-sys.path.append("..")
+sys.path.append("../shared_modules/models/classification/")
+sys.path.append("../shared_modules/")
 print(sys.path)
 
 from nets import bow_net
@@ -36,40 +36,37 @@ from config import PDConfig
 
 from utils import init_checkpoint
 
+
 def ernie_pyreader(args, pyreader_name):
-    src_ids = fluid.layers.data(
-        name="src_ids", shape=[-1, args.max_seq_len, 1], dtype="int64")
-    sent_ids = fluid.layers.data(
-        name="sent_ids", shape=[-1, args.max_seq_len, 1], dtype="int64")
-    pos_ids = fluid.layers.data(
-        name="pos_ids", shape=[-1, args.max_seq_len, 1], dtype="int64")
-    input_mask = fluid.layers.data(
-        name="input_mask", shape=[-1, args.max_seq_len, 1], dtype="float32")
-    labels = fluid.layers.data(
-        name="labels", shape=[-1, 1], dtype="int64")
-    seq_lens = fluid.layers.data(
-        name="seq_lens", shape=[-1], dtype="int64")
-    
+    src_ids = fluid.data(
+        name="src_ids", shape=[None, args.max_seq_len, 1], dtype="int64")
+    sent_ids = fluid.data(
+        name="sent_ids", shape=[None, args.max_seq_len, 1], dtype="int64")
+    pos_ids = fluid.data(
+        name="pos_ids", shape=[None, args.max_seq_len, 1], dtype="int64")
+    input_mask = fluid.data(
+        name="input_mask", shape=[None, args.max_seq_len, 1], dtype="float32")
+    labels = fluid.data(name="labels", shape=[None, 1], dtype="int64")
+    seq_lens = fluid.data(name="seq_lens", shape=[None], dtype="int64")
+
     pyreader = fluid.io.DataLoader.from_generator(
         feed_list=[src_ids, sent_ids, pos_ids, input_mask, labels, seq_lens],
         capacity=50,
         iterable=False,
         use_double_buffer=True)
-    
+
     ernie_inputs = {
         "src_ids": src_ids,
         "sent_ids": sent_ids,
         "pos_ids": pos_ids,
         "input_mask": input_mask,
-        "seq_lens": seq_lens}
-    
+        "seq_lens": seq_lens
+    }
+
     return pyreader, ernie_inputs, labels
 
-def create_model(args,
-                 embeddings,
-                 labels,
-                 is_prediction=False):
 
+def create_model(args, embeddings, labels, is_prediction=False):
     """
     Create Model for sentiment classification based on ERNIE encoder
     """
@@ -78,11 +75,11 @@ def create_model(args,
 
     if args.model_type == "ernie_base":
         ce_loss, probs = ernie_base_net(sentence_embeddings, labels,
-            args.num_labels)
+                                        args.num_labels)
 
     elif args.model_type == "ernie_bilstm":
         ce_loss, probs = ernie_bilstm_net(token_embeddings, labels,
-            args.num_labels)
+                                          args.num_labels)
 
     else:
         raise ValueError("Unknown network type!")
@@ -120,8 +117,8 @@ def evaluate(exe, test_program, test_pyreader, fetch_list, eval_phase):
             break
     time_end = time.time()
     print("[%s evaluation] ave loss: %f, ave acc: %f, elapsed time: %f s" %
-        (eval_phase, np.sum(total_cost) / np.sum(total_num_seqs),
-        np.sum(total_acc) / np.sum(total_num_seqs), time_end - time_begin))
+          (eval_phase, np.sum(total_cost) / np.sum(total_num_seqs),
+           np.sum(total_acc) / np.sum(total_num_seqs), time_end - time_begin))
 
 
 def infer(exe, infer_program, infer_pyreader, fetch_list, infer_phase):
@@ -132,8 +129,9 @@ def infer(exe, infer_program, infer_pyreader, fetch_list, infer_phase):
     time_begin = time.time()
     while True:
         try:
-            batch_probs = exe.run(program=infer_program, fetch_list=fetch_list,
-                                return_numpy=True)
+            batch_probs = exe.run(program=infer_program,
+                                  fetch_list=fetch_list,
+                                  return_numpy=True)
             for probs in batch_probs[0]:
                 print("%d\t%f\t%f" % (np.argmax(probs), probs[0], probs[1]))
         except fluid.core.EOFException:
@@ -195,21 +193,19 @@ def main(args):
             with fluid.unique_name.guard():
                 # create ernie_pyreader
                 train_pyreader, ernie_inputs, labels = ernie_pyreader(
-                    args,
-                    pyreader_name='train_pyreader')
+                    args, pyreader_name='train_pyreader')
 
                 # get ernie_embeddings
                 if args.use_paddle_hub:
-                    embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, args.max_seq_len)
+                    embeddings = ernie_encoder_with_paddle_hub(ernie_inputs,
+                                                               args.max_seq_len)
                 else:
-                    embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config)
+                    embeddings = ernie_encoder(
+                        ernie_inputs, ernie_config=ernie_config)
 
                 # user defined model based on ernie embeddings
                 loss, accuracy, num_seqs = create_model(
-                args,
-                embeddings,
-                labels=labels,
-                is_prediction=False)
+                    args, embeddings, labels=labels, is_prediction=False)
 
                 optimizer = fluid.optimizer.Adam(learning_rate=args.lr)
                 optimizer.minimize(loss)
@@ -218,62 +214,59 @@ def main(args):
             lower_mem, upper_mem, unit = fluid.contrib.memory_usage(
                 program=train_program, batch_size=args.batch_size)
             print("Theoretical memory usage in training: %.3f - %.3f %s" %
-                (lower_mem, upper_mem, unit))
+                  (lower_mem, upper_mem, unit))
 
     if args.do_val:
         test_data_generator = reader.data_generator(
-                input_file=args.dev_set,
-                batch_size=args.batch_size,
-                phase='dev',
-                epoch=1,
-                shuffle=False)
+            input_file=args.dev_set,
+            batch_size=args.batch_size,
+            phase='dev',
+            epoch=1,
+            shuffle=False)
         test_prog = fluid.Program()
         with fluid.program_guard(test_prog, startup_prog):
             with fluid.unique_name.guard():
                 # create ernie_pyreader
                 test_pyreader, ernie_inputs, labels = ernie_pyreader(
-                    args,
-                    pyreader_name='eval_reader')
+                    args, pyreader_name='eval_reader')
 
                 # get ernie_embeddings
                 if args.use_paddle_hub:
-                    embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, args.max_seq_len)
+                    embeddings = ernie_encoder_with_paddle_hub(ernie_inputs,
+                                                               args.max_seq_len)
                 else:
-                    embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config)
+                    embeddings = ernie_encoder(
+                        ernie_inputs, ernie_config=ernie_config)
 
                 # user defined model based on ernie embeddings
                 loss, accuracy, num_seqs = create_model(
-                args,
-                embeddings,
-                labels=labels,
-                is_prediction=False)
+                    args, embeddings, labels=labels, is_prediction=False)
 
         test_prog = test_prog.clone(for_test=True)
 
     if args.do_infer:
         infer_data_generator = reader.data_generator(
-                input_file=args.test_set,
-                batch_size=args.batch_size,
-                phase='infer',
-                epoch=1,
-                shuffle=False)
+            input_file=args.test_set,
+            batch_size=args.batch_size,
+            phase='infer',
+            epoch=1,
+            shuffle=False)
         infer_prog = fluid.Program()
         with fluid.program_guard(infer_prog, startup_prog):
             with fluid.unique_name.guard():
                 infer_pyreader, ernie_inputs, labels = ernie_pyreader(
-                    args,
-                    pyreader_name="infer_pyreader")
+                    args, pyreader_name="infer_pyreader")
 
                 # get ernie_embeddings
                 if args.use_paddle_hub:
-                    embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, args.max_seq_len)
+                    embeddings = ernie_encoder_with_paddle_hub(ernie_inputs,
+                                                               args.max_seq_len)
                 else:
-                    embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config)
+                    embeddings = ernie_encoder(
+                        ernie_inputs, ernie_config=ernie_config)
 
-                probs = create_model(args,
-                                    embeddings,
-                                    labels=labels,
-                                    is_prediction=True)
+                probs = create_model(
+                    args, embeddings, labels=labels, is_prediction=True)
 
         infer_prog = infer_prog.clone(for_test=True)
 
@@ -282,25 +275,17 @@ def main(args):
     if args.do_train:
         if args.init_checkpoint:
             init_checkpoint(
-                exe,
-                args.init_checkpoint,
-                main_program=train_program)
+                exe, args.init_checkpoint, main_program=train_program)
     elif args.do_val:
         if not args.init_checkpoint:
             raise ValueError("args 'init_checkpoint' should be set if"
                              "only doing validation or testing!")
-        init_checkpoint(
-            exe,
-            args.init_checkpoint,
-            main_program=test_prog)
+        init_checkpoint(exe, args.init_checkpoint, main_program=test_prog)
     elif args.do_infer:
         if not args.init_checkpoint:
             raise ValueError("args 'init_checkpoint' should be set if"
                              "only doing validation or testing!")
-        init_checkpoint(
-            exe,
-            args.init_checkpoint,
-            main_program=infer_prog)
+        init_checkpoint(exe, args.init_checkpoint, main_program=infer_prog)
 
     if args.do_train:
         train_exe = exe
@@ -327,7 +312,9 @@ def main(args):
                 else:
                     fetch_list = []
 
-                outputs = train_exe.run(program=train_program, fetch_list=fetch_list, return_numpy=False)
+                outputs = train_exe.run(program=train_program,
+                                        fetch_list=fetch_list,
+                                        return_numpy=False)
                 if steps % args.skip_steps == 0:
                     np_loss, np_acc, np_num_seqs = outputs
                     np_loss = np.array(np_loss)
@@ -338,34 +325,36 @@ def main(args):
                     total_num_seqs.extend(np_num_seqs)
 
                     if args.verbose:
-                        verbose = "train pyreader queue size: %d, " % train_pyreader.queue.size()
+                        verbose = "train pyreader queue size: %d, " % (
+                            train_pyreader.queue.size())
                         print(verbose)
 
                     time_end = time.time()
                     used_time = time_end - time_begin
                     print("step: %d, ave loss: %f, "
-                        "ave acc: %f, speed: %f steps/s" %
-                        (steps, np.sum(total_cost) / np.sum(total_num_seqs),
-                        np.sum(total_acc) / np.sum(total_num_seqs),
-                        args.skip_steps / used_time))
+                          "ave acc: %f, speed: %f steps/s" %
+                          (steps, np.sum(total_cost) / np.sum(total_num_seqs),
+                           np.sum(total_acc) / np.sum(total_num_seqs),
+                           args.skip_steps / used_time))
                     total_cost, total_acc, total_num_seqs = [], [], []
                     time_begin = time.time()
 
                 if steps % args.save_steps == 0:
                     save_path = os.path.join(args.checkpoints,
-                                         "step_" + str(steps))
-                    fluid.io.save_persistables(exe, save_path, train_program)
+                                             "step_" + str(steps), "checkpoint")
+                    fluid.save(train_program, save_path)
 
                 if steps % args.validation_steps == 0:
                     # evaluate dev set
                     if args.do_val:
                         evaluate(exe, test_prog, test_pyreader,
-                                [loss.name, accuracy.name, num_seqs.name],
-                                "dev")
+                                 [loss.name, accuracy.name, num_seqs.name],
+                                 "dev")
 
             except fluid.core.EOFException:
-                save_path = os.path.join(args.checkpoints, "step_" + str(steps))
-                fluid.io.save_persistables(exe, save_path, train_program)
+                save_path = os.path.join(args.checkpoints, "step_" + str(steps),
+                                         "checkpoint")
+                fluid.save(train_program, save_path)
                 train_pyreader.reset()
                 break
 
@@ -373,13 +362,13 @@ def main(args):
     if args.do_val:
         print("Final validation result:")
         evaluate(exe, test_prog, test_pyreader,
-            [loss.name, accuracy.name, num_seqs.name], "dev")
+                 [loss.name, accuracy.name, num_seqs.name], "dev")
 
     # final eval on test set
     if args.do_infer:
         print("Final test result:")
-        infer(exe, infer_prog, infer_pyreader,
-            [probs.name], "infer")
+        infer(exe, infer_prog, infer_pyreader, [probs.name], "infer")
+
 
 if __name__ == "__main__":
     args = PDConfig()
diff --git a/PaddleNLP/sentiment_classification/utils.py b/PaddleNLP/sentiment_classification/utils.py
index 9a6d648a0bff7fa968b9d41e474f5e196f42ca10..fa94c3dd092699afe0a4f3329e4675bd0dbbf94e 100644
--- a/PaddleNLP/sentiment_classification/utils.py
+++ b/PaddleNLP/sentiment_classification/utils.py
@@ -31,6 +31,7 @@ class ArgumentGroup(object):
     """
     Argument Class
     """
+
     def __init__(self, parser, title, des):
         self._group = parser.add_argument_group(title=title, description=des)
 
@@ -63,23 +64,14 @@ def init_checkpoint(exe, init_checkpoint_path, main_program):
     """
     assert os.path.exists(
         init_checkpoint_path), "[%s] cann't be found." % init_checkpoint_path
-
-    def existed_persitables(var):
-        """
-        If existed presitabels
-        """
-        if not fluid.io.is_persistable(var):
-            return False
-        return os.path.exists(os.path.join(init_checkpoint_path, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        init_checkpoint_path,
-        main_program=main_program,
-        predicate=existed_persitables)
+    try:
+        checkpoint_path = os.path.join(init_checkpoint_path, "checkpoint")
+        fluid.load(main_program, checkpoint_path, exe)
+    except Exception:
+        fluid.load(main_program, init_checkpoint_path, exe)
     print("Load model from {}".format(init_checkpoint_path))
 
-    
+
 def data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):
     """
     Convert word sequence into slot
@@ -96,8 +88,10 @@ def data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):
                 sys.stderr.write("[NOTICE] Error Format Line!")
                 continue
             label = int(cols[1])
-            wids = [word_dict[x] if x in word_dict else unk_id
-                    for x in cols[0].split(" ")]
+            wids = [
+                word_dict[x] if x in word_dict else unk_id
+                for x in cols[0].split(" ")
+            ]
             seq_len = len(wids)
             if seq_len < max_seq_len:
                 for i in range(max_seq_len - seq_len):
@@ -111,7 +105,7 @@ def data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):
         random.shuffle(all_data)
 
     num_examples[phrase] = len(all_data)
-        
+
     def reader():
         """
         Reader Function
@@ -119,8 +113,10 @@ def data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):
         for epoch_index in range(epoch):
             for doc, label, seq_len in all_data:
                 yield doc, label, seq_len
+
     return reader
 
+
 def load_vocab(file_path):
     """
     load the given vocabulary
@@ -144,15 +140,6 @@ def init_pretraining_params(exe,
     assert os.path.exists(pretraining_params_path
                           ), "[%s] cann't be found." % pretraining_params_path
 
-    def _existed_params(var):
-        if not isinstance(var, fluid.framework.Parameter):
-            return False
-        return os.path.exists(os.path.join(pretraining_params_path, var.name))
-
-    fluid.io.load_vars(
-        exe,
-        pretraining_params_path,
-        main_program=main_program,
-        predicate=_existed_params)
+    fluid.load(main_program, pretraining_params_path, exe)
     print("Load pretraining parameters from {}.".format(
         pretraining_params_path))
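
Both hunks in utils.py above swap the per-variable `fluid.io.load_vars`/`fluid.io.save_persistables` flow for the unified `fluid.save`/`fluid.load` pair, which operates on a single path prefix (here `<dir>/checkpoint`). A hedged round-trip sketch, assuming `train_program` is a built `fluid.Program` and `exe` an `Executor` that has already run the startup program:

```python
import os
import paddle.fluid as fluid

save_path = os.path.join("checkpoints", "step_100", "checkpoint")

# One call persists all persistable variables under the prefix
# (on disk this typically produces checkpoint.pdparams / checkpoint.pdopt).
fluid.save(train_program, save_path)

# Restoring mirrors the call; the executor fills the program's variables.
fluid.load(train_program, save_path, exe)
```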
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/README.md b/PaddleNLP/seq2seq/seq2seq/README.md
similarity index 96%
rename from PaddleNLP/PaddleTextGEN/seq2seq/README.md
rename to PaddleNLP/seq2seq/seq2seq/README.md
index ad0fa962f3e1be4aa26e81c56b7b2f5d41027025..6fb5c903da89eb88f05d90d5dc8a91dbd6bb6d1a 100644
--- a/PaddleNLP/PaddleTextGEN/seq2seq/README.md
+++ b/PaddleNLP/seq2seq/seq2seq/README.md
@@ -1,4 +1,4 @@
-Running the example models in this directory requires PaddlePaddle Fluid 1.6. If your installed PaddlePaddle version is lower than this requirement, please update your PaddlePaddle installation following the instructions in the [installation guide](https://www.paddlepaddle.org.cn/#quick-start).
+Running the example models in this directory requires PaddlePaddle Fluid 1.7. If your installed PaddlePaddle version is lower than this requirement, please update your PaddlePaddle installation following the instructions in the [installation guide](https://www.paddlepaddle.org.cn/#quick-start).
 
 # Sequence to Sequence (Seq2Seq)
 
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/__init__.py b/PaddleNLP/seq2seq/seq2seq/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/seq2seq/__init__.py
rename to PaddleNLP/seq2seq/seq2seq/__init__.py
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/args.py b/PaddleNLP/seq2seq/seq2seq/args.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/seq2seq/args.py
rename to PaddleNLP/seq2seq/seq2seq/args.py
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/attention_model.py b/PaddleNLP/seq2seq/seq2seq/attention_model.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/seq2seq/attention_model.py
rename to PaddleNLP/seq2seq/seq2seq/attention_model.py
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/base_model.py b/PaddleNLP/seq2seq/seq2seq/base_model.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/seq2seq/base_model.py
rename to PaddleNLP/seq2seq/seq2seq/base_model.py
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/download.py b/PaddleNLP/seq2seq/seq2seq/download.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/seq2seq/download.py
rename to PaddleNLP/seq2seq/seq2seq/download.py
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/infer.py b/PaddleNLP/seq2seq/seq2seq/infer.py
similarity index 97%
rename from PaddleNLP/PaddleTextGEN/seq2seq/infer.py
rename to PaddleNLP/seq2seq/seq2seq/infer.py
index 4724042905662c6044a1142865e3b653464afb7d..921710259b12a647316c04070b57410054b69445 100644
--- a/PaddleNLP/PaddleTextGEN/seq2seq/infer.py
+++ b/PaddleNLP/seq2seq/seq2seq/infer.py
@@ -93,7 +93,7 @@ def infer():
     # clone from default main program and use it as the validation program
     main_program = fluid.default_main_program()
     main_program = main_program.clone(for_test=True)
-    print([param.name for param in main_program.blocks[0].all_parameters()])
+    print([param.name for param in main_program.all_parameters()])
 
     place = fluid.CUDAPlace(0) if args.use_gpu else fluid.CPUPlace()
     exe = Executor(place)
@@ -127,7 +127,8 @@ def infer():
 
     dir_name = args.reload_model
     print("dir name", dir_name)
-    fluid.io.load_params(exe, dir_name)
+    dir_name = os.path.join(dir_name, "checkpoint")
+    fluid.load(main_program, dir_name, exe)
 
     train_data_iter = reader.get_data_iter(infer_data, 1, mode='eval')
 
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/infer.sh b/PaddleNLP/seq2seq/seq2seq/infer.sh
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/seq2seq/infer.sh
rename to PaddleNLP/seq2seq/seq2seq/infer.sh
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/reader.py b/PaddleNLP/seq2seq/seq2seq/reader.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/seq2seq/reader.py
rename to PaddleNLP/seq2seq/seq2seq/reader.py
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/run.sh b/PaddleNLP/seq2seq/seq2seq/run.sh
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/seq2seq/run.sh
rename to PaddleNLP/seq2seq/seq2seq/run.sh
diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/train.py b/PaddleNLP/seq2seq/seq2seq/train.py
similarity index 97%
rename from PaddleNLP/PaddleTextGEN/seq2seq/train.py
rename to PaddleNLP/seq2seq/seq2seq/train.py
index e44d9a47692d4a527afc09486a02119d4037ea65..33d8b51b71584f745ad2009d56f4b5ab5b0c9a41 100644
--- a/PaddleNLP/PaddleTextGEN/seq2seq/train.py
+++ b/PaddleNLP/seq2seq/seq2seq/train.py
@@ -214,7 +214,7 @@ def main():
                     ce_ppl.append(np.exp(total_loss / word_count))
                     total_loss = 0.0
                     word_count = 0.0
-                
+
                 # profiler tools
                 if args.profile and epoch_id == 0 and batch_id == 100:
                     profiler.reset_profiler()
@@ -229,10 +229,10 @@ def main():
                 % (epoch_id, epoch_time, sum(batch_times) / len(batch_times)))
 
             if not args.profile:
-                dir_name = os.path.join(args.model_path,
-                                        "epoch_" + str(epoch_id))
-                print("begin to save", dir_name)
-                fluid.io.save_params(exe, dir_name, main_program=train_program)
+                save_path = os.path.join(args.model_path,
+                                         "epoch_" + str(epoch_id), "checkpoint")
+                print("begin to save", save_path)
+                fluid.save(train_program, save_path)
                 print("save finished")
                 dev_ppl = eval(valid_data)
                 print("dev ppl", dev_ppl)
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/README.md b/PaddleNLP/seq2seq/variational_seq2seq/README.md
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/README.md
rename to PaddleNLP/seq2seq/variational_seq2seq/README.md
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/__init__.py b/PaddleNLP/seq2seq/variational_seq2seq/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/__init__.py
rename to PaddleNLP/seq2seq/variational_seq2seq/__init__.py
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/args.py b/PaddleNLP/seq2seq/variational_seq2seq/args.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/args.py
rename to PaddleNLP/seq2seq/variational_seq2seq/args.py
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/download.py b/PaddleNLP/seq2seq/variational_seq2seq/download.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/download.py
rename to PaddleNLP/seq2seq/variational_seq2seq/download.py
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.py b/PaddleNLP/seq2seq/variational_seq2seq/infer.py
similarity index 97%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.py
rename to PaddleNLP/seq2seq/variational_seq2seq/infer.py
index c21fff3a5e13c3853c70988c458d12a18aef964c..2044602050fc5803cee8ef126ee37dc1641de856 100644
--- a/PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.py
+++ b/PaddleNLP/seq2seq/variational_seq2seq/infer.py
@@ -88,7 +88,8 @@ def infer():
 
     dir_name = args.reload_model
     print("dir name", dir_name)
-    fluid.io.load_params(exe, dir_name)
+    dir_name = os.path.join(dir_name, "checkpoint")
+    fluid.load(main_program, dir_name, exe)
     vocab, tar_id2vocab = get_vocab(args.dataset_prefix)
     infer_output = np.ones((batch_size, 1), dtype='int64') * BOS_ID
 
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.sh b/PaddleNLP/seq2seq/variational_seq2seq/infer.sh
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.sh
rename to PaddleNLP/seq2seq/variational_seq2seq/infer.sh
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/model.py b/PaddleNLP/seq2seq/variational_seq2seq/model.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/model.py
rename to PaddleNLP/seq2seq/variational_seq2seq/model.py
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/reader.py b/PaddleNLP/seq2seq/variational_seq2seq/reader.py
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/reader.py
rename to PaddleNLP/seq2seq/variational_seq2seq/reader.py
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/run.sh b/PaddleNLP/seq2seq/variational_seq2seq/run.sh
similarity index 100%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/run.sh
rename to PaddleNLP/seq2seq/variational_seq2seq/run.sh
diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/train.py b/PaddleNLP/seq2seq/variational_seq2seq/train.py
similarity index 97%
rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/train.py
rename to PaddleNLP/seq2seq/variational_seq2seq/train.py
index 98515a8329fba8508b01accbbed940ef2df65842..ae2973fb03b4d9aa7e617a8f839616d4a4f569a6 100644
--- a/PaddleNLP/PaddleTextGEN/variational_seq2seq/train.py
+++ b/PaddleNLP/seq2seq/variational_seq2seq/train.py
@@ -255,10 +255,11 @@ def main():
                 best_nll = test_nll
                 best_ppl = test_ppl
                 best_epoch_id = epoch_id
-                dir_name = os.path.join(args.model_path,
-                                        "epoch_" + str(best_epoch_id))
-                print("save model {}".format(dir_name))
-                fluid.io.save_params(exe, dir_name, main_program)
+                save_path = os.path.join(args.model_path,
+                                         "epoch_" + str(best_epoch_id),
+                                         "checkpoint")
+                print("save model {}".format(save_path))
+                fluid.save(main_program, save_path)
             else:
                 steps_not_improved += 1
                 if steps_not_improved == decay_ts:
diff --git a/PaddleNLP/models/__init__.py b/PaddleNLP/shared_modules/__init__.py
similarity index 100%
rename from PaddleNLP/models/__init__.py
rename to PaddleNLP/shared_modules/__init__.py
diff --git a/PaddleNLP/models/classification/__init__.py b/PaddleNLP/shared_modules/models/__init__.py
similarity index 100%
rename from PaddleNLP/models/classification/__init__.py
rename to PaddleNLP/shared_modules/models/__init__.py
diff --git a/PaddleNLP/models/language_model/__init__.py b/PaddleNLP/shared_modules/models/classification/__init__.py
similarity index 100%
rename from PaddleNLP/models/language_model/__init__.py
rename to PaddleNLP/shared_modules/models/classification/__init__.py
diff --git a/PaddleNLP/models/classification/nets.py b/PaddleNLP/shared_modules/models/classification/nets.py
similarity index 99%
rename from PaddleNLP/models/classification/nets.py
rename to PaddleNLP/shared_modules/models/classification/nets.py
index c66b3927972472d1bba60c49dac9dc5f70795634..d05f006823e143dff2e241a768785130882f6e79 100644
--- a/PaddleNLP/models/classification/nets.py
+++ b/PaddleNLP/shared_modules/models/classification/nets.py
@@ -4,6 +4,7 @@ This module provide nets for text classification
 
 import paddle.fluid as fluid
 
+
 def bow_net(data,
             seq_len,
             label,
diff --git a/PaddleNLP/models/matching/__init__.py b/PaddleNLP/shared_modules/models/language_model/__init__.py
similarity index 100%
rename from PaddleNLP/models/matching/__init__.py
rename to PaddleNLP/shared_modules/models/language_model/__init__.py
diff --git a/PaddleNLP/models/language_model/lm_model.py b/PaddleNLP/shared_modules/models/language_model/lm_model.py
similarity index 100%
rename from PaddleNLP/models/language_model/lm_model.py
rename to PaddleNLP/shared_modules/models/language_model/lm_model.py
diff --git a/PaddleNLP/models/matching/losses/__init__.py b/PaddleNLP/shared_modules/models/matching/__init__.py
similarity index 100%
rename from PaddleNLP/models/matching/losses/__init__.py
rename to PaddleNLP/shared_modules/models/matching/__init__.py
diff --git a/PaddleNLP/models/matching/bow.py b/PaddleNLP/shared_modules/models/matching/bow.py
similarity index 100%
rename from PaddleNLP/models/matching/bow.py
rename to PaddleNLP/shared_modules/models/matching/bow.py
diff --git a/PaddleNLP/models/matching/cnn.py b/PaddleNLP/shared_modules/models/matching/cnn.py
similarity index 94%
rename from PaddleNLP/models/matching/cnn.py
rename to PaddleNLP/shared_modules/models/matching/cnn.py
index 3e31292cabfb9ea5a340b7941bcc2ef3885d3202..f78b5bee511ba107ad2ae7819768186364f20142 100644
--- a/PaddleNLP/models/matching/cnn.py
+++ b/PaddleNLP/shared_modules/models/matching/cnn.py
@@ -43,8 +43,8 @@ class CNN(object):
         left_emb = emb_layer.ops(left)
         right_emb = emb_layer.ops(right)
         # Presentation context
-        cnn_layer = layers.SequenceConvPoolLayer(
-            self.filter_size, self.num_filters, "conv")
+        cnn_layer = layers.SequenceConvPoolLayer(self.filter_size,
+                                                 self.num_filters, "conv")
         left_cnn = cnn_layer.ops(left_emb)
         right_cnn = cnn_layer.ops(right_emb)
         # matching layer
diff --git a/PaddleNLP/models/matching/gru.py b/PaddleNLP/shared_modules/models/matching/gru.py
similarity index 100%
rename from PaddleNLP/models/matching/gru.py
rename to PaddleNLP/shared_modules/models/matching/gru.py
diff --git a/PaddleNLP/models/matching/optimizers/__init__.py b/PaddleNLP/shared_modules/models/matching/losses/__init__.py
similarity index 100%
rename from PaddleNLP/models/matching/optimizers/__init__.py
rename to PaddleNLP/shared_modules/models/matching/losses/__init__.py
diff --git a/PaddleNLP/models/matching/losses/hinge_loss.py b/PaddleNLP/shared_modules/models/matching/losses/hinge_loss.py
similarity index 100%
rename from PaddleNLP/models/matching/losses/hinge_loss.py
rename to PaddleNLP/shared_modules/models/matching/losses/hinge_loss.py
diff --git a/PaddleNLP/models/matching/losses/log_loss.py b/PaddleNLP/shared_modules/models/matching/losses/log_loss.py
similarity index 100%
rename from PaddleNLP/models/matching/losses/log_loss.py
rename to PaddleNLP/shared_modules/models/matching/losses/log_loss.py
diff --git a/PaddleNLP/models/matching/losses/softmax_cross_entropy_loss.py b/PaddleNLP/shared_modules/models/matching/losses/softmax_cross_entropy_loss.py
similarity index 100%
rename from PaddleNLP/models/matching/losses/softmax_cross_entropy_loss.py
rename to PaddleNLP/shared_modules/models/matching/losses/softmax_cross_entropy_loss.py
diff --git a/PaddleNLP/models/matching/lstm.py b/PaddleNLP/shared_modules/models/matching/lstm.py
similarity index 100%
rename from PaddleNLP/models/matching/lstm.py
rename to PaddleNLP/shared_modules/models/matching/lstm.py
diff --git a/PaddleNLP/models/matching/mm_dnn.py b/PaddleNLP/shared_modules/models/matching/mm_dnn.py
similarity index 100%
rename from PaddleNLP/models/matching/mm_dnn.py
rename to PaddleNLP/shared_modules/models/matching/mm_dnn.py
diff --git a/PaddleNLP/models/neural_machine_translation/transformer/__init__.py b/PaddleNLP/shared_modules/models/matching/optimizers/__init__.py
similarity index 100%
rename from PaddleNLP/models/neural_machine_translation/transformer/__init__.py
rename to PaddleNLP/shared_modules/models/matching/optimizers/__init__.py
diff --git a/PaddleNLP/models/matching/optimizers/paddle_optimizers.py b/PaddleNLP/shared_modules/models/matching/optimizers/paddle_optimizers.py
similarity index 100%
rename from PaddleNLP/models/matching/optimizers/paddle_optimizers.py
rename to PaddleNLP/shared_modules/models/matching/optimizers/paddle_optimizers.py
diff --git a/PaddleNLP/models/matching/paddle_layers.py b/PaddleNLP/shared_modules/models/matching/paddle_layers.py
similarity index 100%
rename from PaddleNLP/models/matching/paddle_layers.py
rename to PaddleNLP/shared_modules/models/matching/paddle_layers.py
diff --git a/PaddleNLP/models/model_check.py b/PaddleNLP/shared_modules/models/model_check.py
similarity index 85%
rename from PaddleNLP/models/model_check.py
rename to PaddleNLP/shared_modules/models/model_check.py
index 51713452a7f0b1019c7b8b7d37d24e0c5f15c77c..2aafb31d85c4de54c57447e52415ea3214ce4bd5 100644
--- a/PaddleNLP/models/model_check.py
+++ b/PaddleNLP/shared_modules/models/model_check.py
@@ -33,20 +33,21 @@ def check_cuda(use_cuda, err = \
     except Exception as e:
         pass
 
+
 def check_version():
-        """
+    """
         Log error and exit when the installed version of paddlepaddle is
         not satisfied.
         """
-        err = "PaddlePaddle version 1.6 or higher is required, " \
-            "or a suitable develop version is satisfied as well. \n" \
-            "Please make sure the version is good with your code." \
+    err = "PaddlePaddle version 1.6 or higher is required, " \
+        "or a suitable develop version is satisfied as well. \n" \
+        "Please make sure the version is good with your code." \
 
-        try:
-            fluid.require_version('1.6.0')
-        except Exception as e:
-            print(err)
-            sys.exit(1)
+    try:
+        fluid.require_version('1.6.0')
+    except Exception as e:
+        print(err)
+        sys.exit(1)
 
 
 def check_version():
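
The reformatted `check_version` is a thin wrapper over `fluid.require_version`; note that the context line above shows the module still defines `check_version` a second time, so the later definition wins at import time. A standalone sketch of the pattern (the `min_version` parameter is an illustrative addition):

```python
import sys
import paddle.fluid as fluid

def check_version(min_version='1.6.0'):
    """Exit with a readable message when the installed PaddlePaddle
    is older than min_version."""
    try:
        fluid.require_version(min_version)
    except Exception:
        print("PaddlePaddle version %s or higher is required." % min_version)
        sys.exit(1)
```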
diff --git a/PaddleNLP/models/reading_comprehension/__init__.py b/PaddleNLP/shared_modules/models/neural_machine_translation/transformer/__init__.py
similarity index 100%
rename from PaddleNLP/models/reading_comprehension/__init__.py
rename to PaddleNLP/shared_modules/models/neural_machine_translation/transformer/__init__.py
diff --git a/PaddleNLP/models/neural_machine_translation/transformer/desc.py b/PaddleNLP/shared_modules/models/neural_machine_translation/transformer/desc.py
similarity index 100%
rename from PaddleNLP/models/neural_machine_translation/transformer/desc.py
rename to PaddleNLP/shared_modules/models/neural_machine_translation/transformer/desc.py
diff --git a/PaddleNLP/models/neural_machine_translation/transformer/model.py b/PaddleNLP/shared_modules/models/neural_machine_translation/transformer/model.py
similarity index 100%
rename from PaddleNLP/models/neural_machine_translation/transformer/model.py
rename to PaddleNLP/shared_modules/models/neural_machine_translation/transformer/model.py
diff --git a/PaddleNLP/models/representation/__init__.py b/PaddleNLP/shared_modules/models/reading_comprehension/__init__.py
similarity index 100%
rename from PaddleNLP/models/representation/__init__.py
rename to PaddleNLP/shared_modules/models/reading_comprehension/__init__.py
diff --git a/PaddleNLP/models/reading_comprehension/bidaf_model.py b/PaddleNLP/shared_modules/models/reading_comprehension/bidaf_model.py
similarity index 100%
rename from PaddleNLP/models/reading_comprehension/bidaf_model.py
rename to PaddleNLP/shared_modules/models/reading_comprehension/bidaf_model.py
diff --git a/PaddleNLP/models/sequence_labeling/__init__.py b/PaddleNLP/shared_modules/models/representation/__init__.py
similarity index 100%
rename from PaddleNLP/models/sequence_labeling/__init__.py
rename to PaddleNLP/shared_modules/models/representation/__init__.py
diff --git a/PaddleNLP/models/representation/ernie.py b/PaddleNLP/shared_modules/models/representation/ernie.py
similarity index 96%
rename from PaddleNLP/models/representation/ernie.py
rename to PaddleNLP/shared_modules/models/representation/ernie.py
index a12c483f0c132c0bf29f89175dbaddef6d6a64b8..a00c9cb0fefd7536567db89eb331a5613f6da01d 100644
--- a/PaddleNLP/models/representation/ernie.py
+++ b/PaddleNLP/shared_modules/models/representation/ernie.py
@@ -30,10 +30,14 @@ from models.transformer_encoder import encoder, pre_process_layer
 
 def ernie_pyreader(args, pyreader_name):
     """define standard ernie pyreader"""
-    src_ids = fluid.data(name='1', shape=[-1, args.max_seq_len, 1], dtype='int64')
-    sent_ids = fluid.data(name='2', shape=[-1, args.max_seq_len, 1], dtype='int64')
-    pos_ids = fluid.data(name='3', shape=[-1, args.max_seq_len, 1], dtype='int64')
-    input_mask = fluid.data(name='4', shape=[-1, args.max_seq_len, 1], dtype='float32')
+    src_ids = fluid.data(
+        name='1', shape=[-1, args.max_seq_len, 1], dtype='int64')
+    sent_ids = fluid.data(
+        name='2', shape=[-1, args.max_seq_len, 1], dtype='int64')
+    pos_ids = fluid.data(
+        name='3', shape=[-1, args.max_seq_len, 1], dtype='int64')
+    input_mask = fluid.data(
+        name='4', shape=[-1, args.max_seq_len, 1], dtype='float32')
     labels = fluid.data(name='5', shape=[-1, 1], dtype='int64')
     seq_lens = fluid.data(name='6', shape=[-1], dtype='int64')
 
diff --git a/PaddleNLP/preprocess/__init__.py b/PaddleNLP/shared_modules/models/sequence_labeling/__init__.py
similarity index 100%
rename from PaddleNLP/preprocess/__init__.py
rename to PaddleNLP/shared_modules/models/sequence_labeling/__init__.py
diff --git a/PaddleNLP/models/sequence_labeling/nets.py b/PaddleNLP/shared_modules/models/sequence_labeling/nets.py
similarity index 100%
rename from PaddleNLP/models/sequence_labeling/nets.py
rename to PaddleNLP/shared_modules/models/sequence_labeling/nets.py
diff --git a/PaddleNLP/models/transformer_encoder.py b/PaddleNLP/shared_modules/models/transformer_encoder.py
similarity index 100%
rename from PaddleNLP/models/transformer_encoder.py
rename to PaddleNLP/shared_modules/models/transformer_encoder.py
diff --git a/PaddleNLP/preprocess/ernie/__init__.py b/PaddleNLP/shared_modules/preprocess/__init__.py
similarity index 100%
rename from PaddleNLP/preprocess/ernie/__init__.py
rename to PaddleNLP/shared_modules/preprocess/__init__.py
diff --git a/PaddleNLP/preprocess/tokenizer/conf/customization.dic b/PaddleNLP/shared_modules/preprocess/ernie/__init__.py
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/customization.dic
rename to PaddleNLP/shared_modules/preprocess/ernie/__init__.py
diff --git a/PaddleNLP/preprocess/ernie/task_reader.py b/PaddleNLP/shared_modules/preprocess/ernie/task_reader.py
similarity index 99%
rename from PaddleNLP/preprocess/ernie/task_reader.py
rename to PaddleNLP/shared_modules/preprocess/ernie/task_reader.py
index b3a8a0d790eb8ae592167b129b2c707ba2318b6f..38a6e56df56923387f78ebeb9652df5398cb34a5 100644
--- a/PaddleNLP/preprocess/ernie/task_reader.py
+++ b/PaddleNLP/shared_modules/preprocess/ernie/task_reader.py
@@ -29,6 +29,7 @@ from preprocess.ernie import tokenization
 from preprocess.padding import pad_batch_data
 import io
 
+
 def csv_reader(fd, delimiter='\t'):
     def gen():
         for i in fd:
@@ -37,8 +38,10 @@ def csv_reader(fd, delimiter='\t'):
                 yield slots,
             else:
                 yield slots
+
     return gen()
 
+
 class BaseReader(object):
     """BaseReader for classify and sequence labeling task"""
 
diff --git a/PaddleNLP/preprocess/ernie/tokenization.py b/PaddleNLP/shared_modules/preprocess/ernie/tokenization.py
similarity index 99%
rename from PaddleNLP/preprocess/ernie/tokenization.py
rename to PaddleNLP/shared_modules/preprocess/ernie/tokenization.py
index 2a06a5818243e4d71aae93fdd1af86c6b14a66b8..08570f30fe9e6a8036a15095e67e6e8dd8686c14 100644
--- a/PaddleNLP/preprocess/ernie/tokenization.py
+++ b/PaddleNLP/shared_modules/preprocess/ernie/tokenization.py
@@ -23,6 +23,7 @@ import unicodedata
 import six
 import io
 
+
 def convert_to_unicode(text):
     """Converts `text` to Unicode (if it's not already), assuming utf-8 input."""
     if six.PY3:
diff --git a/PaddleNLP/preprocess/padding.py b/PaddleNLP/shared_modules/preprocess/padding.py
similarity index 100%
rename from PaddleNLP/preprocess/padding.py
rename to PaddleNLP/shared_modules/preprocess/padding.py
diff --git a/PaddleNLP/preprocess/tokenizer/README b/PaddleNLP/shared_modules/preprocess/tokenizer/README
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/README
rename to PaddleNLP/shared_modules/preprocess/tokenizer/README
diff --git a/PaddleNLP/shared_modules/preprocess/tokenizer/conf/customization.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/customization.dic
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/PaddleNLP/preprocess/tokenizer/conf/customization.dic.example b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/customization.dic.example
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/customization.dic.example
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/customization.dic.example
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/__model__ b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/__model__
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/__model__
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/__model__
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/crfw b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/crfw
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/crfw
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/crfw
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_0.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_0.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_0.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_0.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_0.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_0.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_0.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_0.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_1.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_1.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_1.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_1.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_1.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_1.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_1.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_1.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_2.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_2.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_2.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_2.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_2.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_2.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_2.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_2.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_3.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_3.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_3.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_3.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_3.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_3.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_3.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_3.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_4.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_4.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_4.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_4.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_4.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_4.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_4.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_4.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_0.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_0.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_0.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_0.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_0.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_0.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_0.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_0.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_1.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_1.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_1.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_1.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_1.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_1.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_1.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_1.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_2.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_2.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_2.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_2.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_2.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_2.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_2.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_2.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_3.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_3.b_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_3.b_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_3.b_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_3.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_3.w_0
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_3.w_0
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_3.w_0
diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/word_emb b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/word_emb
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/model/word_emb
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/word_emb
diff --git a/PaddleNLP/preprocess/tokenizer/conf/q2b.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/q2b.dic
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/q2b.dic
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/q2b.dic
diff --git a/PaddleNLP/preprocess/tokenizer/conf/strong_punc.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/strong_punc.dic
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/strong_punc.dic
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/strong_punc.dic
diff --git a/PaddleNLP/preprocess/tokenizer/conf/tag.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/tag.dic
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/tag.dic
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/tag.dic
diff --git a/PaddleNLP/preprocess/tokenizer/conf/word.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/word.dic
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/conf/word.dic
rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/word.dic
diff --git a/PaddleNLP/preprocess/tokenizer/reader.py b/PaddleNLP/shared_modules/preprocess/tokenizer/reader.py
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/reader.py
rename to PaddleNLP/shared_modules/preprocess/tokenizer/reader.py
diff --git a/PaddleNLP/preprocess/tokenizer/test.txt.utf8 b/PaddleNLP/shared_modules/preprocess/tokenizer/test.txt.utf8
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/test.txt.utf8
rename to PaddleNLP/shared_modules/preprocess/tokenizer/test.txt.utf8
diff --git a/PaddleNLP/preprocess/tokenizer/tokenizer.py b/PaddleNLP/shared_modules/preprocess/tokenizer/tokenizer.py
similarity index 100%
rename from PaddleNLP/preprocess/tokenizer/tokenizer.py
rename to PaddleNLP/shared_modules/preprocess/tokenizer/tokenizer.py
diff --git a/PaddleNLP/similarity_net/run_classifier.py b/PaddleNLP/similarity_net/run_classifier.py
index 944bb1117bde232cdb7b6631428376832a0937ad..7272ce1e0374ddef96b5fe21408487e578596797 100644
--- a/PaddleNLP/similarity_net/run_classifier.py
+++ b/PaddleNLP/similarity_net/run_classifier.py
@@ -30,7 +30,7 @@ if sys.getdefaultencoding() != defaultencoding:
     reload(sys)
     sys.setdefaultencoding(defaultencoding)
 
-sys.path.append("..")
+sys.path.append("../shared_modules/")
 
 import paddle
 import paddle.fluid as fluid
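
With the path bump above, the `models.*` imports resolve to the relocated `shared_modules` tree. The script also passes the new locations to `utils.import_class`; its internals are not shown in this diff, but a plausible equivalent (an assumption, not the repo's actual helper) is a thin `importlib` wrapper:

```python
import sys
import importlib

def import_class(module_path, module_name, class_name):
    """Hypothetical stand-in for utils.import_class: load class_name
    from module_name located under module_path."""
    if module_path not in sys.path:
        sys.path.append(module_path)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# e.g. net_cls = import_class("../shared_modules/models/matching",
#                             "bow", "BOW")  # names are illustrative
```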
@@ -47,18 +47,18 @@ from models.model_check import check_version
 from models.model_check import check_cuda
 
 
-def create_model(args, pyreader_name, is_inference = False, is_pointwise = False):
+def create_model(args, pyreader_name, is_inference=False, is_pointwise=False):
     """
     Create Model for simnet
     """
     if is_inference:
         inf_pyreader = fluid.layers.py_reader(
-        capacity=16,
-        shapes=([-1,1], [-1,1]),
-        dtypes=('int64', 'int64'),
-        lod_levels=(1, 1),
-        name=pyreader_name,
-        use_double_buffer=False)
+            capacity=16,
+            shapes=([-1], [-1]),
+            dtypes=('int64', 'int64'),
+            lod_levels=(1, 1),
+            name=pyreader_name,
+            use_double_buffer=False)
 
         left, pos_right = fluid.layers.read_file(inf_pyreader)
         return inf_pyreader, left, pos_right
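
The shape change from `([-1, 1], [-1, 1])` to `([-1], [-1])` matches the 1-D LoD sequences these readers are actually fed (`lod_levels=(1, 1)`). A minimal sketch of the corrected declaration, assuming the same two int64 sequence inputs:

```python
import paddle.fluid as fluid

# Two variable-length int64 sequences (lod_level=1); the batch size
# stays open via -1, and each slot is declared as a flat sequence.
pyreader = fluid.layers.py_reader(
    capacity=16,
    shapes=([-1], [-1]),
    dtypes=('int64', 'int64'),
    lod_levels=(1, 1),
    name='inf_reader',
    use_double_buffer=False)

left, right = fluid.layers.read_file(pyreader)
```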
@@ -66,28 +66,30 @@ def create_model(args, pyreader_name, is_inference = False, is_pointwise = False
     else:
         if is_pointwise:
             pointwise_pyreader = fluid.layers.py_reader(
-            capacity=16,
-            shapes=([-1,1], [-1,1], [-1,1]),
-            dtypes=('int64', 'int64', 'int64'),
-            lod_levels=(1, 1, 0),
-            name=pyreader_name,
-            use_double_buffer=False)
+                capacity=16,
+                shapes=([-1], [-1], [-1]),
+                dtypes=('int64', 'int64', 'int64'),
+                lod_levels=(1, 1, 0),
+                name=pyreader_name,
+                use_double_buffer=False)
 
             left, right, label = fluid.layers.read_file(pointwise_pyreader)
             return pointwise_pyreader, left, right, label
 
         else:
             pairwise_pyreader = fluid.layers.py_reader(
-            capacity=16,
-            shapes=([-1,1], [-1,1], [-1,1]),
-            dtypes=('int64', 'int64', 'int64'),
-            lod_levels=(1, 1, 1),
-            name=pyreader_name,
-            use_double_buffer=False)
-
-            left, pos_right, neg_right = fluid.layers.read_file(pairwise_pyreader)
+                capacity=16,
+                shapes=([-1], [-1], [-1]),
+                dtypes=('int64', 'int64', 'int64'),
+                lod_levels=(1, 1, 1),
+                name=pyreader_name,
+                use_double_buffer=False)
+
+            left, pos_right, neg_right = fluid.layers.read_file(
+                pairwise_pyreader)
             return pairwise_pyreader, left, pos_right, neg_right
-        
+
+
 def train(conf_dict, args):
     """
     train processic
@@ -97,16 +99,16 @@ def train(conf_dict, args):
     # get vocab size
     conf_dict['dict_size'] = len(vocab)
     # Load network structure dynamically
-    net = utils.import_class("../models/matching",
+    net = utils.import_class("../shared_modules/models/matching",
                              conf_dict["net"]["module_name"],
                              conf_dict["net"]["class_name"])(conf_dict)
     # Load loss function dynamically
-    loss = utils.import_class("../models/matching/losses",
+    loss = utils.import_class("../shared_modules/models/matching/losses",
                               conf_dict["loss"]["module_name"],
                               conf_dict["loss"]["class_name"])(conf_dict)
     # Load Optimization method
     optimizer = utils.import_class(
-        "../models/matching/optimizers", "paddle_optimizers",
+        "../shared_modules/models/matching/optimizers", "paddle_optimizers",
         conf_dict["optimizer"]["class_name"])(conf_dict)
     # load auc method
     metric = fluid.metrics.Auc(name="auc")
@@ -131,22 +133,23 @@ def train(conf_dict, args):
         with fluid.program_guard(train_program, startup_prog):
             with fluid.unique_name.guard():
                 train_pyreader, left, pos_right, neg_right = create_model(
-                    args, 
-                    pyreader_name='train_reader')
+                    args, pyreader_name='train_reader')
                 left_feat, pos_score = net.predict(left, pos_right)
                 pred = pos_score
                 _, neg_score = net.predict(left, neg_right)
                 avg_cost = loss.compute(pos_score, neg_score)
                 avg_cost.persistable = True
                 optimizer.ops(avg_cost)
-                
+
         # Get Reader
-        get_train_examples = simnet_process.get_reader("train",epoch=args.epoch)
+        get_train_examples = simnet_process.get_reader(
+            "train", epoch=args.epoch)
         if args.do_valid:
             test_prog = fluid.Program()
             with fluid.program_guard(test_prog, startup_prog):
                 with fluid.unique_name.guard():
-                    test_pyreader, left, pos_right= create_model(args, pyreader_name = 'test_reader',is_inference=True)
+                    test_pyreader, left, pos_right = create_model(
+                        args, pyreader_name='test_reader', is_inference=True)
                     left_feat, pos_score = net.predict(left, pos_right)
                     pred = pos_score
             test_prog = test_prog.clone(for_test=True)
@@ -156,40 +159,41 @@ def train(conf_dict, args):
         with fluid.program_guard(train_program, startup_prog):
             with fluid.unique_name.guard():
                 train_pyreader, left, right, label = create_model(
-                    args, 
-                    pyreader_name='train_reader',
-                    is_pointwise=True)
+                    args, pyreader_name='train_reader', is_pointwise=True)
                 left_feat, pred = net.predict(left, right)
                 avg_cost = loss.compute(pred, label)
                 avg_cost.persistable = True
                 optimizer.ops(avg_cost)
 
         # Get Feeder and Reader
-        get_train_examples = simnet_process.get_reader("train",epoch=args.epoch)
+        get_train_examples = simnet_process.get_reader(
+            "train", epoch=args.epoch)
         if args.do_valid:
             test_prog = fluid.Program()
             with fluid.program_guard(test_prog, startup_prog):
                 with fluid.unique_name.guard():
-                    test_pyreader, left, right= create_model(args, pyreader_name = 'test_reader',is_inference=True)
+                    test_pyreader, left, right = create_model(
+                        args, pyreader_name='test_reader', is_inference=True)
                     left_feat, pred = net.predict(left, right)
             test_prog = test_prog.clone(for_test=True)
 
     if args.init_checkpoint != "":
-        utils.init_checkpoint(exe, args.init_checkpoint, 
-                              startup_prog)
+        utils.init_checkpoint(exe, args.init_checkpoint, startup_prog)
 
-    def valid_and_test(test_program, test_pyreader, get_valid_examples, process, mode, exe, fetch_list):
+    def valid_and_test(test_program, test_pyreader, get_valid_examples, process,
+                       mode, exe, fetch_list):
         """
         return auc and acc
         """
         # Get Batch Data
-        batch_data = fluid.io.batch(get_valid_examples, args.batch_size, drop_last=False)
+        batch_data = fluid.io.batch(
+            get_valid_examples, args.batch_size, drop_last=False)
         test_pyreader.decorate_paddle_reader(batch_data)
         test_pyreader.start()
         pred_list = []
         while True:
             try:
-                _pred = exe.run(program=test_program,fetch_list=[pred.name])
+                _pred = exe.run(program=test_program, fetch_list=[pred.name])
                 pred_list += list(_pred)
             except fluid.core.EOFException:
                 test_pyreader.reset()
@@ -222,11 +226,12 @@ def train(conf_dict, args):
     #for epoch_id in range(args.epoch):
     # used for continuous evaluation
     if args.enable_ce:
-        train_batch_data = fluid.io.batch(get_train_examples, args.batch_size, drop_last=False)
+        train_batch_data = fluid.io.batch(
+            get_train_examples, args.batch_size, drop_last=False)
     else:
         train_batch_data = fluid.io.batch(
             fluid.io.shuffle(
-               get_train_examples, buf_size=10000),
+                get_train_examples, buf_size=10000),
             args.batch_size,
             drop_last=False)
     train_pyreader.decorate_paddle_reader(train_batch_data)
@@ -238,25 +243,29 @@ def train(conf_dict, args):
         try:
             global_step += 1
             fetch_list = [avg_cost.name]
-            avg_loss = train_exe.run(program=train_program, fetch_list = fetch_list)
+            avg_loss = train_exe.run(program=train_program,
+                                     fetch_list=fetch_list)
             losses.append(np.mean(avg_loss[0]))
             if args.do_valid and global_step % args.validation_steps == 0:
                 get_valid_examples = simnet_process.get_reader("valid")
-                valid_result = valid_and_test(test_prog,test_pyreader,get_valid_examples,simnet_process,"valid",exe,[pred.name])
+                valid_result = valid_and_test(
+                    test_prog, test_pyreader, get_valid_examples,
+                    simnet_process, "valid", exe, [pred.name])
                 if args.compute_accuracy:
                     valid_auc, valid_acc = valid_result
                     logging.info(
-                        "global_steps: %d, valid_auc: %f, valid_acc: %f, valid_loss: %f" %
-                        (global_step, valid_auc, valid_acc, np.mean(losses)))
+                        "global_steps: %d, valid_auc: %f, valid_acc: %f, valid_loss: %f"
+                        % (global_step, valid_auc, valid_acc, np.mean(losses)))
                 else:
                     valid_auc = valid_result
-                    logging.info("global_steps: %d, valid_auc: %f, valid_loss: %f" %
-                                (global_step, valid_auc, np.mean(losses)))
+                    logging.info(
+                        "global_steps: %d, valid_auc: %f, valid_loss: %f" %
+                        (global_step, valid_auc, np.mean(losses)))
             if global_step % args.save_steps == 0:
                 model_save_dir = os.path.join(args.output_dir,
-                                            conf_dict["model_path"])
+                                              conf_dict["model_path"])
                 model_path = os.path.join(model_save_dir, str(global_step))
-                    
+
                 if not os.path.exists(model_save_dir):
                     os.makedirs(model_save_dir)
                 if args.task_mode == "pairwise":
@@ -269,21 +278,19 @@ def train(conf_dict, args):
                     ]
                     target_vars = [left_feat, pred]
                 fluid.io.save_inference_model(model_path, feed_var_names,
-                                            target_vars, exe,
-                                            test_prog)
+                                              target_vars, exe, test_prog)
                 logging.info("saving infer model in %s" % model_path)
-        
+
         except fluid.core.EOFException:
             train_pyreader.reset()
             break
     end_time = time.time()
     #logging.info("epoch: %d, loss: %f, used time: %d sec" %
-                #(epoch_id, np.mean(losses), end_time - start_time))
+    #(epoch_id, np.mean(losses), end_time - start_time))
     ce_info.append([np.mean(losses), end_time - start_time])
     #final save
-    logging.info("the final step is %s" % global_step)    
-    model_save_dir = os.path.join(args.output_dir,
-                                conf_dict["model_path"])
+    logging.info("the final step is %s" % global_step)
+    model_save_dir = os.path.join(args.output_dir, conf_dict["model_path"])
     model_path = os.path.join(model_save_dir, str(global_step))
     if not os.path.exists(model_save_dir):
         os.makedirs(model_save_dir)
@@ -296,9 +303,8 @@ def train(conf_dict, args):
             right.name,
         ]
         target_vars = [left_feat, pred]
-    fluid.io.save_inference_model(model_path, feed_var_names,
-                                target_vars, exe,
-                                test_prog)
+    fluid.io.save_inference_model(model_path, feed_var_names, target_vars, exe,
+                                  test_prog)
     logging.info("saving infer model in %s" % model_path)
     # used for continuous evaluation
     if args.enable_ce:
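
For context on the `fluid.io.save_inference_model` call reformatted just above: it prunes `test_prog` down to the subgraph needed to compute `target_vars` from the variables named in `feed_var_names`, then serializes that pruned program. A self-contained sketch of the save/load round trip on a toy program, assuming the Paddle 1.x `fluid.io` API (the toy network and paths here are illustrative, not SimNet's):

```python
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

main_prog, startup_prog = fluid.Program(), fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.data(name="x", shape=[None, 4], dtype="float32")
    y = fluid.layers.fc(input=x, size=2)
exe.run(startup_prog)

# Mirrors the call in the diff: keep only what is needed to compute y from
# the feed named "x", and serialize that pruned program to disk.
fluid.io.save_inference_model("./toy_infer_model", ["x"], [y], exe, main_prog)

# Deployment side: recover the pruned program, feed names, and fetch targets.
infer_prog, feed_names, fetch_targets = fluid.io.load_inference_model(
    "./toy_infer_model", exe)
out, = exe.run(infer_prog,
               feed={feed_names[0]: np.random.rand(3, 4).astype("float32")},
               fetch_list=fetch_targets)
print(out.shape)  # (3, 2)
```
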
@@ -322,7 +328,9 @@ def train(conf_dict, args):
         else:
             # Get Feeder and Reader
             get_test_examples = simnet_process.get_reader("test")
-        test_result = valid_and_test(test_prog,test_pyreader,get_test_examples,simnet_process,"test",exe,[pred.name])
+        test_result = valid_and_test(test_prog, test_pyreader,
+                                     get_test_examples, simnet_process, "test",
+                                     exe, [pred.name])
         if args.compute_accuracy:
             test_auc, test_acc = test_result
             logging.info("AUC of test is %f, Accuracy of test is %f" %
@@ -344,16 +352,17 @@ def test(conf_dict, args):
 
     vocab = utils.load_vocab(args.vocab_path)
     simnet_process = reader.SimNetProcessor(args, vocab)
-    
+
     startup_prog = fluid.Program()
 
     get_test_examples = simnet_process.get_reader("test")
-    batch_data = fluid.io.batch(get_test_examples, args.batch_size, drop_last=False)
+    batch_data = fluid.io.batch(
+        get_test_examples, args.batch_size, drop_last=False)
     test_prog = fluid.Program()
 
     conf_dict['dict_size'] = len(vocab)
 
-    net = utils.import_class("../models/matching",
+    net = utils.import_class("../shared_modules/models/matching",
                              conf_dict["net"]["module_name"],
                              conf_dict["net"]["class_name"])(conf_dict)
 
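
Both `test()` and `infer()` repoint `utils.import_class` from `../models/matching` to `../shared_modules/models/matching`, but the helper itself is not shown in this diff. A plausible sketch of such a loader, assuming it imports `<dir>/<module_name>.py` and returns the named class (a hypothetical implementation, not necessarily the repo's):

```python
import importlib.util
import os

def import_class(module_dir, module_name, class_name):
    # Load <module_dir>/<module_name>.py as a throwaway module and return
    # the requested class object from it.
    path = os.path.join(module_dir, module_name + ".py")
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, class_name)

# Usage mirroring the diff: the returned class is instantiated with conf_dict.
# net = import_class("../shared_modules/models/matching",
#                    conf_dict["net"]["module_name"],
#                    conf_dict["net"]["class_name"])(conf_dict)
```
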
@@ -364,9 +373,7 @@ def test(conf_dict, args):
             with fluid.program_guard(test_prog, startup_prog):
                 with fluid.unique_name.guard():
                     test_pyreader, left, pos_right = create_model(
-                        args,
-                        pyreader_name = 'test_reader',
-                        is_inference=True)
+                        args, pyreader_name='test_reader', is_inference=True)
                     left_feat, pos_score = net.predict(left, pos_right)
                     pred = pos_score
             test_prog = test_prog.clone(for_test=True)
@@ -375,19 +382,14 @@ def test(conf_dict, args):
             with fluid.program_guard(test_prog, startup_prog):
                 with fluid.unique_name.guard():
                     test_pyreader, left, right = create_model(
-                        args,
-                        pyreader_name = 'test_reader',
-                        is_inference=True)
+                        args, pyreader_name='test_reader', is_inference=True)
                     left_feat, pred = net.predict(left, right)
             test_prog = test_prog.clone(for_test=True)
 
         exe.run(startup_prog)
 
-        utils.init_checkpoint(
-            exe,
-            args.init_checkpoint,
-            main_program=test_prog)
-        
+        utils.init_checkpoint(exe, args.init_checkpoint, main_program=test_prog)
+
         test_exe = exe
         test_pyreader.decorate_paddle_reader(batch_data)
 
@@ -398,15 +400,18 @@ def test(conf_dict, args):
         output = []
         while True:
             try:
-                output = test_exe.run(program=test_prog,fetch_list=fetch_list)
+                output = test_exe.run(program=test_prog, fetch_list=fetch_list)
                 if args.task_mode == "pairwise":
-                    pred_list += list(map(lambda item: float(item[0]), output[0]))
+                    pred_list += list(
+                        map(lambda item: float(item[0]), output[0]))
                     predictions_file.write(u"\n".join(
-                        map(lambda item: str((item[0] + 1) / 2), output[0])) + "\n")
+                        map(lambda item: str((item[0] + 1) / 2), output[0])) +
+                                           "\n")
                 else:
                     pred_list += map(lambda item: item, output[0])
                     predictions_file.write(u"\n".join(
-                        map(lambda item: str(np.argmax(item)), output[0])) + "\n")
+                        map(lambda item: str(np.argmax(item)), output[0])) +
+                                           "\n")
             except fluid.core.EOFException:
                 test_pyreader.reset()
                 break
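
A side note on the `(item[0] + 1) / 2` expression that yapf re-wraps in this hunk: in pairwise mode `pred` is a cosine-style similarity in [-1, 1], and this affine map rescales it to a [0, 1] score before it is written to the predictions file (our reading of the code, not documented behavior):

```python
def to_unit_interval(score):
    # Affine map [-1, 1] -> [0, 1], as applied to each pairwise prediction.
    return (score + 1) / 2

assert to_unit_interval(-1.0) == 0.0
assert to_unit_interval(0.0) == 0.5
assert to_unit_interval(1.0) == 1.0
```
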
@@ -450,37 +455,37 @@ def infer(conf_dict, args):
     startup_prog = fluid.Program()
 
     get_infer_examples = simnet_process.get_infer_reader
-    batch_data = fluid.io.batch(get_infer_examples, args.batch_size, drop_last=False)
+    batch_data = fluid.io.batch(
+        get_infer_examples, args.batch_size, drop_last=False)
 
     test_prog = fluid.Program()
 
     conf_dict['dict_size'] = len(vocab)
 
-    net = utils.import_class("../models/matching",
+    net = utils.import_class("../shared_modules/models/matching",
                              conf_dict["net"]["module_name"],
                              conf_dict["net"]["class_name"])(conf_dict)
 
     if args.task_mode == "pairwise":
         with fluid.program_guard(test_prog, startup_prog):
             with fluid.unique_name.guard():
-                infer_pyreader, left, pos_right = create_model(args, pyreader_name = 'infer_reader', is_inference = True)
+                infer_pyreader, left, pos_right = create_model(
+                    args, pyreader_name='infer_reader', is_inference=True)
                 left_feat, pos_score = net.predict(left, pos_right)
                 pred = pos_score
         test_prog = test_prog.clone(for_test=True)
     else:
         with fluid.program_guard(test_prog, startup_prog):
             with fluid.unique_name.guard():
-                infer_pyreader, left, right = create_model(args, pyreader_name = 'infer_reader', is_inference = True)
+                infer_pyreader, left, right = create_model(
+                    args, pyreader_name='infer_reader', is_inference=True)
                 left_feat, pred = net.predict(left, right)
         test_prog = test_prog.clone(for_test=True)
 
     exe.run(startup_prog)
 
-    utils.init_checkpoint(
-        exe,
-        args.init_checkpoint,
-        main_program=test_prog)
-    
+    utils.init_checkpoint(exe, args.init_checkpoint, main_program=test_prog)
+
     test_exe = exe
     infer_pyreader.decorate_sample_list_generator(batch_data)
 
@@ -490,16 +495,16 @@ def infer(conf_dict, args):
     output = []
     infer_pyreader.start()
     while True:
-            try:
-                output = test_exe.run(program=test_prog,fetch_list=fetch_list)
-                if args.task_mode == "pairwise":
-                    preds_list += list(
-                        map(lambda item: str((item[0] + 1) / 2), output[0]))
-                else:
-                    preds_list += map(lambda item: str(np.argmax(item)), output[0])
-            except fluid.core.EOFException:
-                infer_pyreader.reset()
-                break
+        try:
+            output = test_exe.run(program=test_prog, fetch_list=fetch_list)
+            if args.task_mode == "pairwise":
+                preds_list += list(
+                    map(lambda item: str((item[0] + 1) / 2), output[0]))
+            else:
+                preds_list += map(lambda item: str(np.argmax(item)), output[0])
+        except fluid.core.EOFException:
+            infer_pyreader.reset()
+            break
     with io.open(args.infer_result_path, "w", encoding="utf8") as infer_file:
         for _data, _pred in zip(simnet_process.get_infer_data(), preds_list):
             infer_file.write(_data + "\t" + _pred + "\n")
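
The block above is mostly a re-indent (the `try` was nested one level too deep), but the loop it fixes is the standard Paddle 1.x PyReader protocol: `start()` the reader, call `run()` without a feed until the source is exhausted, then catch `fluid.core.EOFException` and `reset()`. A minimal sketch of that pattern on a toy program, assuming the `fluid.io.PyReader` API (this is not the SimNet graph):

```python
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

main_prog, startup_prog = fluid.Program(), fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.data(name="x", shape=[None, 1], dtype="float32")
    pyreader = fluid.io.PyReader(feed_list=[x], capacity=4, iterable=False)
    y = fluid.layers.scale(x, scale=2.0)
exe.run(startup_prog)

def batch_reader():
    # Yields batches as lists of samples; one single-feature sample per batch.
    for i in range(5):
        yield [(np.array([i], dtype="float32"),)]

pyreader.decorate_sample_list_generator(batch_reader)

pyreader.start()
while True:
    try:
        out, = exe.run(program=main_prog, fetch_list=[y.name])
    except fluid.core.EOFException:  # raised once batch_reader is exhausted
        pyreader.reset()
        break
```
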
@@ -514,6 +519,7 @@ def get_cards():
         num = len(cards.split(","))
     return num
 
+
 if __name__ == "__main__":
 
     args = ArgConfig()
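
Finally, `get_cards()` (touched above only by the added blank line) derives the device count from `CUDA_VISIBLE_DEVICES`. A self-contained sketch of that logic, assuming the usual comma-separated device list (the fallback when the variable is unset is our guess; it is not shown in the diff):

```python
import os

def get_cards():
    # "0,2,3" -> 3 visible GPUs; an empty or unset variable falls back to 1 here.
    num = 1
    cards = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    if cards:
        num = len(cards.split(","))
    return num

os.environ["CUDA_VISIBLE_DEVICES"] = "0,2,3"
print(get_cards())  # 3
```
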
diff --git a/README.md b/README.md
index 4701c4dac40421bac64de174725e7c172b975985..d6e6bd403bb014feb85d5092f803b5141abd745c 100644
--- a/README.md
+++ b/README.md
@@ -149,7 +149,7 @@ PaddlePaddle provides a rich set of computational units, enabling users to adopt a modular
 [**PaddleNLP**](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP) is an open-source project of natural language processing (NLP) tools, algorithms, models, and data built on the PaddlePaddle deep learning framework. More than a decade of Baidu's accumulated NLP expertise powers PaddleNLP at its core. With PaddleNLP, you get:
 
 - **Rich and comprehensive NLP task support:**
-  - PaddleNLP offers multi-granularity, multi-scenario application support, covering everything from fundamental NLP techniques such as [word segmentation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), [part-of-speech tagging](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), and [named entity recognition](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis) to core NLP techniques such as [text classification](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification), [text similarity computation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net), [semantic representation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK), and [text generation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleTextGEN). PaddleNLP also provides the specific core techniques, tool components, models, and pretrained parameters for common large-scale NLP application systems (such as [reading comprehension](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMRC), [dialogue systems](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue), and [machine translation systems](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMT)), so nothing stands in your way in NLP.
+  - PaddleNLP offers multi-granularity, multi-scenario application support, covering everything from fundamental NLP techniques such as [word segmentation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), [part-of-speech tagging](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis), and [named entity recognition](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis) to core NLP techniques such as [text classification](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification), [text similarity computation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net), [semantic representation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/pretrain_langauge_models), and [text generation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/seq2seq). PaddleNLP also provides the specific core techniques, tool components, models, and pretrained parameters for common large-scale NLP application systems (such as [reading comprehension](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/machine_reading_comprehension), [dialogue systems](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/dialogue_system), and [machine translation systems](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/machine_translation)), so nothing stands in your way in NLP.
 - **Stable and reliable NLP models with powerful pretrained parameters:**
   - PaddleNLP integrates the NLP tool models widely used inside Baidu, giving you stable and reliable NLP algorithm solutions. Pretrained parameters built on tens of billions of data samples, together with a rich set of pretrained models, help you easily improve model quality and inject strong momentum into your NLP business.
 - **Continuous improvement and technical support, so you can build NLP applications from scratch:**
@@ -167,14 +167,14 @@ PaddlePaddle provides a rich set of computational units, enabling users to adopt a modular
 
 #### Semantic Representation
 
-[PaddleLARK](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK) (Paddle LAnguage Representation ToolKit) takes traditional language models a step further: general-purpose semantic representation models trained on large-scale corpora can benefit other natural language processing tasks, embodying the general-pretraining + task-specific fine-tuning paradigm. PaddleLARK integrates popular Chinese and English pretrained models such as ELMo, BERT, ERNIE 1.0, ERNIE 2.0, and XLNet.
+[pretrain_langauge_models](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/pretrain_langauge_models) (Paddle LAnguage Representation ToolKit) takes traditional language models a step further: general-purpose semantic representation models trained on large-scale corpora can benefit other natural language processing tasks, embodying the general-pretraining + task-specific fine-tuning paradigm. pretrain_langauge_models integrates popular Chinese and English pretrained models such as ELMo, BERT, ERNIE 1.0, ERNIE 2.0, and XLNet.
 
 | Model                                                        | Description                                                  |
 | ------------------------------------------------------------ | ------------------------------------------------------------ |
 | [ERNIE](https://github.com/PaddlePaddle/ERNIE)(Enhanced Representation from kNowledge IntEgration) | Baidu's self-developed semantic representation model. It learns real-world semantic knowledge by modeling the words, entities, and entity relations in massive data; where BERT learns from raw language signals, ERNIE directly models prior semantic knowledge units, strengthening its semantic representation capability. |
-| [BERT](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK/BERT)(Bidirectional Encoder Representation from Transformers) | A highly transferable general-purpose semantic representation model. Built on Transformer blocks and trained with bidirectional Masked Language Model and Next Sentence Prediction objectives, it learns general semantic representations through pretraining; combined with a simple output layer, it applies to downstream NLP tasks and achieves SOTA results on many of them. |
-| [XLNet](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK/XLNet)(XLNet: Generalized Autoregressive Pretraining for Language Understanding) | One of the key semantic representation models. With Transformer-XL as its backbone and Permutation Language Modeling as its training objective, it outperforms BERT on several downstream tasks. |
-| [ELMo](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK/ELMo)(Embeddings from Language Models) | One of the key general-purpose semantic representation models. Built on bidirectional LSTMs and trained with a language-model objective, it learns general semantic representations through pretraining; transferring these representations as features into downstream NLP tasks significantly improves downstream model performance. |
+| [BERT](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/pretrain_langauge_models/BERT)(Bidirectional Encoder Representation from Transformers) | A highly transferable general-purpose semantic representation model. Built on Transformer blocks and trained with bidirectional Masked Language Model and Next Sentence Prediction objectives, it learns general semantic representations through pretraining; combined with a simple output layer, it applies to downstream NLP tasks and achieves SOTA results on many of them. |
+| [XLNet](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/pretrain_langauge_models/XLNet)(XLNet: Generalized Autoregressive Pretraining for Language Understanding) | One of the key semantic representation models. With Transformer-XL as its backbone and Permutation Language Modeling as its training objective, it outperforms BERT on several downstream tasks. |
+| [ELMo](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/pretrain_langauge_models/ELMo)(Embeddings from Language Models) | One of the key general-purpose semantic representation models. Built on bidirectional LSTMs and trained with a language-model objective, it learns general semantic representations through pretraining; transferring these representations as features into downstream NLP tasks significantly improves downstream model performance. |
 
 #### Text Similarity Computation
 
@@ -182,7 +182,7 @@ PaddlePaddle provides a rich set of computational units, enabling users to adopt a modular
 
 #### Text Generation
 
-[PaddleTextGEN](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleTextGEN) (Paddle Text Generation) is a text generation framework based on PaddlePaddle that provides a series of classic text generation model examples, such as vanilla seq2seq, seq2seq with attention, and variational seq2seq.
+[seq2seq](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/seq2seq) (Paddle Text Generation) is a text generation framework based on PaddlePaddle that provides a series of classic text generation model examples, such as vanilla seq2seq, seq2seq with attention, and variational seq2seq.
 
 ### NLP System Applications
 
@@ -195,7 +195,7 @@ PaddlePaddle provides a rich set of computational units, enabling users to adopt a modular
 
 #### Reading Comprehension
 
-[PaddleMRC](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMRC) (Paddle Machine Reading Comprehension) brings together a series of Baidu's work in the reading comprehension field: models, tools, open-source datasets, and more.
+[machine_reading_comprehension](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/machine_reading_comprehension) (Paddle Machine Reading Comprehension) brings together a series of Baidu's work in the reading comprehension field: models, tools, open-source datasets, and more.
 
 | Model                                                        | Description                                                  |
 | ------------------------------------------------------------ | ------------------------------------------------------------ |
@@ -205,16 +205,16 @@ PaddlePaddle provides a rich set of computational units, enabling users to adopt a modular
 
 #### Machine Translation
 
-[PaddleMT](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMT), short for Paddle Machine Translation, is the classic Transformer-based machine translation model, following the paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762).
+[machine_translation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/machine_translation), short for Paddle Machine Translation, is the classic Transformer-based machine translation model, following the paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762).
 
 #### Dialogue Systems
 
-[PaddleDialogue](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue) contains models, datasets, and tools for dialogue systems.
+[dialogue_system](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/dialogue_system) contains models, datasets, and tools for dialogue systems.
 
 | Model                                                        | Description                                                  |
 | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| [DGU](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue/dialogue_general_understanding) (Dialogue General Understanding) | Covers common dialogue system tasks, including context-response matching in **retrieval-based chat systems** and **intent detection**, **slot filling**, and **dialogue state tracking** in **task-oriented dialogue systems**, achieving the best results on 6 international public datasets. |
-| [ADEM](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation) (Auto Dialogue Evaluation Model) | Evaluates the response quality of open-domain dialogue systems, helping companies and individuals quickly assess response quality and reducing the cost of human evaluation. |
+| [DGU](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/dialogue_system/dialogue_general_understanding) (Dialogue General Understanding) | Covers common dialogue system tasks, including context-response matching in **retrieval-based chat systems** and **intent detection**, **slot filling**, and **dialogue state tracking** in **task-oriented dialogue systems**, achieving the best results on 6 international public datasets. |
+| [ADEM](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/dialogue_system/auto_dialogue_evaluation) (Auto Dialogue Evaluation Model) | Evaluates the response quality of open-domain dialogue systems, helping companies and individuals quickly assess response quality and reducing the cost of human evaluation. |
 | [Proactive Conversation](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2019-DuConv) | Includes [DuConv](https://ai.baidu.com/broad/subordinate?dataset=duconv), Baidu's open-source knowledge-driven open-domain dialogue dataset, together with baseline models. The corresponding paper, [Proactive Human-Machine Conversation with Explicit Conversation Goals](https://arxiv.org/abs/1906.05572), was published at ACL 2019. |
 | [DAM](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2018-DAM)(Deep Attention Matching Network) | An open-domain multi-turn dialogue matching model. The corresponding paper, [Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network](https://aclweb.org/anthology/P18-1103/), was published at ACL 2018. |