diff --git a/.gitmodules b/.gitmodules index 5aa792e9f72831d8df8efd33dc744225988e72f7..464b36ae3542d12aee39d1a421350fcbf80912f9 100644 --- a/.gitmodules +++ b/.gitmodules @@ -13,6 +13,3 @@ [submodule "PaddleSpeech/DeepSpeech"] path = PaddleSpeech/DeepSpeech url = https://github.com/PaddlePaddle/DeepSpeech.git -[submodule "PaddleNLP/PALM"] - path = PaddleNLP/PALM - url = https://github.com/PaddlePaddle/PALM diff --git a/PaddleNLP/PALM b/PaddleNLP/PALM deleted file mode 160000 index 5426f75073cf5bd416622dbe71b146d3dc8fffb6..0000000000000000000000000000000000000000 --- a/PaddleNLP/PALM +++ /dev/null @@ -1 +0,0 @@ -Subproject commit 5426f75073cf5bd416622dbe71b146d3dc8fffb6 diff --git a/PaddleNLP/PaddleLARK/ERNIE b/PaddleNLP/PaddleLARK/ERNIE deleted file mode 160000 index 30b892e3c029bff706337f269e6c158b0a223f60..0000000000000000000000000000000000000000 --- a/PaddleNLP/PaddleLARK/ERNIE +++ /dev/null @@ -1 +0,0 @@ -Subproject commit 30b892e3c029bff706337f269e6c158b0a223f60 diff --git a/PaddleNLP/README.md b/PaddleNLP/README.md index aa26b2ecfc27444bdb63f588250a2fb93e2081cf..fe128fe6ff60174ef8daf4fc50915cba2b042e2d 100644 --- a/PaddleNLP/README.md +++ b/PaddleNLP/README.md @@ -10,7 +10,7 @@ - **丰富而全面的NLP任务支持:** - - 
PaddleNLP为您提供了多粒度,多场景的应用支持。涵盖了从[分词](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis),[词性标注](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis),[命名实体识别](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis)等NLP基础技术,到[文本分类](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification),[文本相似度计算](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net),[语义表示](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK),[文本生成](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleTextGEN)等NLP核心技术。同时,PaddleNLP还提供了针对常见NLP大型应用系统(如[阅读理解](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMRC),[对话系统](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue),[机器翻译系统](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMT)等)的特定核心技术和工具组件,模型和预训练参数等,让您在NLP领域畅通无阻。 + - PaddleNLP为您提供了多粒度,多场景的应用支持。涵盖了从[分词](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/lexical_analysis),[词性标注](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/lexical_analysis),[命名实体识别](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/lexical_analysis)等NLP基础技术,到[文本分类](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/sentiment_classification),[文本相似度计算](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/similarity_net),[语义表示](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/pretrain_langauge_models),[文本生成](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/seq2seq)等NLP核心技术。同时,PaddleNLP还提供了针对常见NLP大型应用系统(如[阅读理解](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/machine_reading_comprehension),[对话系统](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/dialogue_system),[机器翻译系统](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/machine_translation)等)的特定核心技术和工
具组件,模型和预训练参数等,让您在NLP领域畅通无阻。 - **稳定可靠的NLP模型和强大的预训练参数:** @@ -50,17 +50,17 @@ cd models/PaddleNLP/sentiment_classification | 任务场景 | 对应项目/目录 | 简介 | | :------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | -| **中文分词**,**词性标注**,**命名实体识别**:fire: | [LAC](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis) | LAC,全称为Lexical Analysis of Chinese,是百度内部广泛使用的中文处理工具,功能涵盖从中文分词,词性标注,命名实体识别等常见中文处理任务。 | -| **词向量(word2vec)** | [word2vec](https://github.com/PaddlePaddle/models/tree/develop/PaddleRec/word2vec) | 提供单机多卡,多机等分布式训练中文词向量能力,支持主流词向量模型(skip-gram,cbow等),可以快速使用自定义数据训练词向量模型。 | -| **语言模型** | [Language_model](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/language_model) | 基于循环神经网络(RNN)的经典神经语言模型(neural language model)。 | -| **情感分类**:fire: | [Senta](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification),[EmotionDetection](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/emotion_detection) | Senta(Sentiment Classification,简称Senta)和EmotionDetection两个项目分别提供了面向*通用场景*和*人机对话场景专用*的情感倾向性分析模型。 | -| **文本相似度计算**:fire: | [SimNet](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net) | SimNet,又称为Similarity Net,为您提供高效可靠的文本相似度计算工具和预训练模型。 | -| **语义表示**:fire: | [PaddleLARK](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK) | PaddleLARK,全称为Paddle LAngauge Representation Toolkit,集成了ELMO,BERT,ERNIE 1.0,ERNIE 2.0,XLNet等热门中英文预训练模型。 | -| **文本生成** | [PaddleTextGEN](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleTextGEN) | Paddle Text Generation为您提供了一些列经典文本生成模型案例,如vanilla seq2seq,seq2seq with attention,variational seq2seq模型等。 | -| **阅读理解** | [PaddleMRC](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMRC) | PaddleMRC,全称为Paddle Machine Reading Comprehension,集合了百度在阅读理解领域相关的模型,工具,开源数据等一系列工作。包括DuReader 
(百度开源的基于真实搜索用户行为的中文大规模阅读理解数据集),KT-Net (结合知识的阅读理解模型,SQuAD以及ReCoRD曾排名第一), D-Net (预训练-微调框架,在EMNLP2019 MRQA国际阅读理解评测获得第一),等。 | -| **对话系统** | [PaddleDialogue](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue) | 包括:1)DGU(Dialogue General Understanding,通用对话理解模型)覆盖了包括**检索式聊天系统**中context-response matching任务和**任务完成型对话系统**中**意图识别**,**槽位解析**,**状态追踪**等常见对话系统任务,在6项国际公开数据集中都获得了最佳效果。
2) knowledge-driven dialogue:百度开源的知识驱动的开放领域对话数据集,发表于ACL2019。
3)ADEM(Auto Dialogue Evaluation Model):对话自动评估模型,可用于自动评估不同对话生成模型的回复质量。 | -| **机器翻译** | [PaddleMT](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMT) | 全称为Paddle Machine Translation,基于Transformer的经典机器翻译模型。 | -| **其他前沿工作** | [Research](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research) | 百度最新前沿工作开源。 | +| **中文分词**,**词性标注**,**命名实体识别**:fire: | [LAC](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/lexical_analysis) | LAC,全称为Lexical Analysis of Chinese,是百度内部广泛使用的中文处理工具,功能涵盖从中文分词,词性标注,命名实体识别等常见中文处理任务。 | +| **词向量(word2vec)** | [word2vec](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleRec/word2vec) | 提供单机多卡,多机等分布式训练中文词向量能力,支持主流词向量模型(skip-gram,cbow等),可以快速使用自定义数据训练词向量模型。 | +| **语言模型** | [Language_model](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/language_model) | 基于循环神经网络(RNN)的经典神经语言模型(neural language model)。 | +| **情感分类**:fire: | [Senta](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/sentiment_classification),[EmotionDetection](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/emotion_detection) | Senta(Sentiment Classification,简称Senta)和EmotionDetection两个项目分别提供了面向*通用场景*和*人机对话场景专用*的情感倾向性分析模型。 | +| **文本相似度计算**:fire: | [SimNet](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/similarity_net) | SimNet,又称为Similarity Net,为您提供高效可靠的文本相似度计算工具和预训练模型。 | +| **语义表示**:fire: | [pretrain_langauge_models](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/pretrain_langauge_models) | 集成了ELMO,BERT,ERNIE 1.0,ERNIE 2.0,XLNet等热门中英文预训练模型。 | +| **文本生成** | [seq2seq](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/seq2seq) | seq2seq为您提供了一系列经典文本生成模型案例,如vanilla seq2seq,seq2seq with attention,variational seq2seq模型等。 | +| **阅读理解** | [machine_reading_comprehension](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/machine_reading_comprehension) | Paddle Machine Reading 
Comprehension,集合了百度在阅读理解领域相关的模型,工具,开源数据等一系列工作。包括DuReader (百度开源的基于真实搜索用户行为的中文大规模阅读理解数据集),KT-Net (结合知识的阅读理解模型,SQuAD以及ReCoRD曾排名第一), D-Net (预训练-微调框架,在EMNLP2019 MRQA国际阅读理解评测获得第一),等。 | +| **对话系统** | [dialogue_system](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/dialogue_system) | 包括:1)DGU(Dialogue General Understanding,通用对话理解模型)覆盖了包括**检索式聊天系统**中context-response matching任务和**任务完成型对话系统**中**意图识别**,**槽位解析**,**状态追踪**等常见对话系统任务,在6项国际公开数据集中都获得了最佳效果。
2) knowledge-driven dialogue:百度开源的知识驱动的开放领域对话数据集,发表于ACL2019。
3)ADEM(Auto Dialogue Evaluation Model):对话自动评估模型,可用于自动评估不同对话生成模型的回复质量。 | +| **机器翻译** | [machine_translation](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/machine_translation) | 全称为Paddle Machine Translation,基于Transformer的经典机器翻译模型。 | +| **其他前沿工作** | [Research](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/Research) | 百度最新前沿工作开源。 | @@ -70,13 +70,13 @@ cd models/PaddleNLP/sentiment_classification ```text . ├── Research # 百度NLP在research方面的工作集合 -├── PaddleMT # 机器翻译相关代码,数据,预训练模型 -├── PaddleDialogue # 对话系统相关代码,数据,预训练模型 -├── PaddleMRC # 阅读理解相关代码,数据,预训练模型 -├── PaddleLARK # 语言表示工具箱 +├── machine_translation # 机器翻译相关代码,数据,预训练模型 +├── dialogue_system # 对话系统相关代码,数据,预训练模型 +├── machine_reading_comprehension # 阅读理解相关代码,数据,预训练模型 +├── pretrain_langauge_models # 语言表示工具箱 ├── language_model # 语言模型 ├── lexical_analysis # LAC词法分析 -├── models # 共享网络 +├── shared_modules/models # 共享网络 │ ├── __init__.py │ ├── classification │ ├── dialogue_model_toolkit @@ -87,7 +87,7 @@ cd models/PaddleNLP/sentiment_classification │ ├── representation │ ├── sequence_labeling │ └── transformer_encoder.py -├── preprocess # 共享文本预处理工具 +├── shared_modules/preprocess # 共享文本预处理工具 │ ├── __init__.py │ ├── ernie │ ├── padding.py diff --git a/PaddleNLP/PaddleDialogue/README.md b/PaddleNLP/dialogue_system/README.md similarity index 100% rename from PaddleNLP/PaddleDialogue/README.md rename to PaddleNLP/dialogue_system/README.md diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/.run_ce.sh b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/.run_ce.sh similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/.run_ce.sh rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/.run_ce.sh diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/README.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/README.md similarity index 99% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/README.md rename to 
PaddleNLP/dialogue_system/auto_dialogue_evaluation/README.md index b668351b7cf69b803f4b12c2bead54843b86c9ed..94059e992966d66f222ef5c44201bcc7957621a6 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/README.md +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/README.md @@ -468,7 +468,7 @@ python -u main.py \ --loss_type="CLS" ``` #### windows环境下: -评估: +评估: ``` python -u main.py --do_eval=true --use_cuda=false --evaluation_file=data\input\data\unlabel_data\test.ids --output_prediction_file=data\output\pretrain_matching_predict --loss_type=CLS ``` diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/_ce.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/_ce.py similarity index 87% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/_ce.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/_ce.py index e29b5aa9e18c06cb7fdab33e59c5f2b9ed32db41..121a2c836e716808dce89ed3e648a2ec05fc41af 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/_ce.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/_ce.py @@ -21,14 +21,16 @@ from kpi import DurationKpi train_loss_card1 = CostKpi('train_loss_card1', 0.03, 0, actived=True) train_loss_card4 = CostKpi('train_loss_card4', 0.03, 0, actived=True) -train_duration_card1 = DurationKpi('train_duration_card1', 0.01, 0, actived=True) -train_duration_card4 = DurationKpi('train_duration_card4', 0.01, 0, actived=True) +train_duration_card1 = DurationKpi( + 'train_duration_card1', 0.01, 0, actived=True) +train_duration_card4 = DurationKpi( + 'train_duration_card4', 0.01, 0, actived=True) tracking_kpis = [ - train_loss_card1, - train_loss_card4, - train_duration_card1, - train_duration_card4, + train_loss_card1, + train_loss_card4, + train_duration_card1, + train_duration_card4, ] diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/__init__.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/__init__.py similarity index 100% rename from 
PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/__init__.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/__init__.py diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/evaluate.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/evaluate.py similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/evaluate.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/evaluate.py diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/prepare_data_and_model.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/prepare_data_and_model.py similarity index 66% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/prepare_data_and_model.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/prepare_data_and_model.py index bc1dce21fd2b08225ac44e88b1ccf18c7e9b85ee..539a1a1febf351870a70e73ceb598ed81b720066 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/prepare_data_and_model.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/prepare_data_and_model.py @@ -20,48 +20,52 @@ import sys import io import os -URLLIB=urllib -if sys.version_info >= (3, 0): +URLLIB = urllib +if sys.version_info >= (3, 0): import urllib.request - URLLIB=urllib.request + URLLIB = urllib.request -DATA_MODEL_PATH = {"DATA_PATH": "https://baidu-nlp.bj.bcebos.com/auto_dialogue_evaluation_dataset-1.0.0.tar.gz", - "TRAINED_MODEL": "https://baidu-nlp.bj.bcebos.com/auto_dialogue_evaluation_models.2.0.0.tar.gz"} +DATA_MODEL_PATH = { + "DATA_PATH": + "https://baidu-nlp.bj.bcebos.com/auto_dialogue_evaluation_dataset-1.0.0.tar.gz", + "TRAINED_MODEL": + "https://baidu-nlp.bj.bcebos.com/auto_dialogue_evaluation_models.2.0.0.tar.gz" +} -PATH_MAP = {'DATA_PATH': "./data/input", - 'TRAINED_MODEL': './data/saved_models'} +PATH_MAP = {'DATA_PATH': "./data/input", 'TRAINED_MODEL': './data/saved_models'} -def un_tar(tar_name, dir_name): - try: +def un_tar(tar_name, 
dir_name): + try: t = tarfile.open(tar_name) - t.extractall(path = dir_name) + t.extractall(path=dir_name) return True except Exception as e: print(e) return False -def download_model_and_data(): +def download_model_and_data(): print("Downloading ade data, pretrain model and trained models......") print("This process is quite long, please wait patiently............") - for path in ['./data/input/data', './data/saved_models/trained_models']: - if not os.path.exists(path): + for path in ['./data/input/data', './data/saved_models/trained_models']: + if not os.path.exists(path): continue shutil.rmtree(path) - for path_key in DATA_MODEL_PATH: + for path_key in DATA_MODEL_PATH: filename = os.path.basename(DATA_MODEL_PATH[path_key]) - URLLIB.urlretrieve(DATA_MODEL_PATH[path_key], os.path.join("./", filename)) + URLLIB.urlretrieve(DATA_MODEL_PATH[path_key], + os.path.join("./", filename)) state = un_tar(filename, PATH_MAP[path_key]) - if not state: + if not state: print("Tar %s error....." % path_key) return False os.remove(filename) return True -if __name__ == "__main__": +if __name__ == "__main__": state = download_model_and_data() - if not state: + if not state: exit(1) print("Downloading data and models sucess......") diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/reader.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/reader.py similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/reader.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/reader.py diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/__init__.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/__init__.py similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/__init__.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/__init__.py diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/configure.py 
b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/configure.py similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/configure.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/configure.py diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/input_field.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/input_field.py similarity index 93% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/input_field.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/input_field.py index 6af31643d2587d0135b878f6f1f187b496955290..e90c7ff0376f4b9936c970fcafacd0ff01a303f4 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/input_field.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/input_field.py @@ -25,8 +25,8 @@ import numpy as np import paddle.fluid as fluid -class InputField(object): - def __init__(self, input_field): +class InputField(object): + def __init__(self, input_field): """init inpit field""" self.context_wordseq = input_field[0] self.response_wordseq = input_field[1] diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/model_check.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/model_check.py similarity index 99% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/model_check.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/model_check.py index 013130cbb0a9f4d44edc28589bf83672b69abf26..dacf1a668238a22d94fbefb9f52b05620581cad2 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/model_check.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/model_check.py @@ -30,7 +30,7 @@ def check_cuda(use_cuda, err = \ if __name__ == "__main__": - + check_cuda(True) check_cuda(False) diff --git 
a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/save_load_io.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/save_load_io.py similarity index 98% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/save_load_io.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/save_load_io.py index a992aec97881ed9834f614b50da30dddcc677858..bdcdd811dd1ebb27542e5facc3ba050f00df08f8 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/save_load_io.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade/utils/save_load_io.py @@ -69,8 +69,8 @@ def init_from_checkpoint(args, exe, program): def init_from_params(args, exe, program): assert isinstance(args.init_from_params, str) - - if not os.path.exists(args.init_from_params): + + if not os.path.exists(args.init_from_params): raise Warning("the params path does not exist.") return False @@ -122,5 +122,3 @@ def save_param(args, exe, program, dirname): print("save parameters at %s" % (os.path.join(param_dir, dirname))) return True - - diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade_net.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade_net.py similarity index 76% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade_net.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade_net.py index 8907b798cf6e4457954c89997e7ad9a84fcbe61e..10db91859a3774b3bf24266d18ebf244850a3b24 100755 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade_net.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/ade_net.py @@ -21,14 +21,13 @@ import paddle import paddle.fluid as fluid -def create_net( - is_training, - model_input, - args, - clip_value=10.0, - word_emb_name="shared_word_emb", - lstm_W_name="shared_lstm_W", - lstm_bias_name="shared_lstm_bias"): +def create_net(is_training, + model_input, + args, + clip_value=10.0, + word_emb_name="shared_word_emb", + lstm_W_name="shared_lstm_W", + 
lstm_bias_name="shared_lstm_bias"): context_wordseq = model_input.context_wordseq response_wordseq = model_input.response_wordseq @@ -52,17 +51,15 @@ def create_net( initializer=fluid.initializer.Normal(scale=0.1))) #fc to fit dynamic LSTM - context_fc = fluid.layers.fc( - input=context_emb, - size=args.hidden_size * 4, - param_attr=fluid.ParamAttr(name='fc_weight'), - bias_attr=fluid.ParamAttr(name='fc_bias')) + context_fc = fluid.layers.fc(input=context_emb, + size=args.hidden_size * 4, + param_attr=fluid.ParamAttr(name='fc_weight'), + bias_attr=fluid.ParamAttr(name='fc_bias')) - response_fc = fluid.layers.fc( - input=response_emb, - size=args.hidden_size * 4, - param_attr=fluid.ParamAttr(name='fc_weight'), - bias_attr=fluid.ParamAttr(name='fc_bias')) + response_fc = fluid.layers.fc(input=response_emb, + size=args.hidden_size * 4, + param_attr=fluid.ParamAttr(name='fc_weight'), + bias_attr=fluid.ParamAttr(name='fc_bias')) #LSTM context_rep, _ = fluid.layers.dynamic_lstm( @@ -82,7 +79,7 @@ def create_net( logits = fluid.layers.bilinear_tensor_product( context_rep, response_rep, size=1) - if args.loss_type == 'CLS': + if args.loss_type == 'CLS': label = fluid.layers.cast(x=label, dtype='float32') loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label) loss = fluid.layers.reduce_mean( @@ -95,10 +92,10 @@ def create_net( loss = fluid.layers.reduce_mean(loss) else: raise ValueError - - if is_training: + + if is_training: return loss - else: + else: return logits @@ -106,7 +103,5 @@ def set_word_embedding(word_emb, place, word_emb_name="shared_word_emb"): """ Set word embedding """ - word_emb_param = fluid.global_scope().find_var( - word_emb_name).get_tensor() + word_emb_param = fluid.global_scope().find_var(word_emb_name).get_tensor() word_emb_param.set(word_emb, place) - diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/config/ade.yaml b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/config/ade.yaml similarity index 100% rename 
from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/config/ade.yaml rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/config/ade.yaml diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/inference_models/inference_models.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/inference_models/inference_models.md similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/inference_models/inference_models.md rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/inference_models/inference_models.md diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/input/input.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/input/input.md similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/input/input.md rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/input/input.md diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/output/output.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/output/output.md similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/output/output.md rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/output/output.md diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/pretrain_model/pretrain_model.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/pretrain_model/pretrain_model.md similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/pretrain_model/pretrain_model.md rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/pretrain_model/pretrain_model.md diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/saved_models/saved_models.md b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/saved_models/saved_models.md similarity index 100% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/data/saved_models/saved_models.md rename 
to PaddleNLP/dialogue_system/auto_dialogue_evaluation/data/saved_models/saved_models.md diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/eval.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/eval.py similarity index 86% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/eval.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/eval.py index edac4669ffccd1d8e250c631c949ab620af14b7d..26f06f1c5150653e880056a820861e5cfb224ed3 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/eval.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/eval.py @@ -23,13 +23,13 @@ import ade.evaluate as evaluate from ade.utils.configure import PDConfig -def do_eval(args): +def do_eval(args): """evaluate metrics""" labels = [] fr = io.open(args.evaluation_file, 'r', encoding="utf8") - for line in fr: + for line in fr: tokens = line.strip().split('\t') - assert len(tokens) == 3 + assert len(tokens) == 3 label = int(tokens[2]) labels.append(label) @@ -43,25 +43,25 @@ def do_eval(args): score = score.astype(np.float64) scores.append(score) - if args.loss_type == 'CLS': + if args.loss_type == 'CLS': recall_dict = evaluate.evaluate_Recall(list(zip(scores, labels))) mean_score = sum(scores) / len(scores) print('mean score: %.6f' % mean_score) print('evaluation recall result:') print('1_in_2: %.6f\t1_in_10: %.6f\t2_in_10: %.6f\t5_in_10: %.6f' % - (recall_dict['1_in_2'], recall_dict['1_in_10'], - recall_dict['2_in_10'], recall_dict['5_in_10'])) - elif args.loss_type == 'L2': + (recall_dict['1_in_2'], recall_dict['1_in_10'], + recall_dict['2_in_10'], recall_dict['5_in_10'])) + elif args.loss_type == 'L2': scores = [x[0] for x in scores] mean_score = sum(scores) / len(scores) cor = evaluate.evaluate_cor(scores, labels) print('mean score: %.6f\nevaluation cor results:%.6f' % - (mean_score, cor)) + (mean_score, cor)) else: raise ValueError - -if __name__ == "__main__": + +if __name__ == "__main__": args = 
PDConfig(yaml_file="./data/config/ade.yaml") args.build() diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/inference_model.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/inference_model.py similarity index 70% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/inference_model.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/inference_model.py index ca0872d00f1a9e35efe9fdcef8ee25a15bf14889..5ef7bfc1a9378061d591308881253d06b4c1b3a0 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/inference_model.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/inference_model.py @@ -42,22 +42,24 @@ def do_save_inference_model(args): with fluid.unique_name.guard(): context_wordseq = fluid.data( - name='context_wordseq', shape=[-1, 1], dtype='int64', lod_level=1) + name='context_wordseq', + shape=[-1, 1], + dtype='int64', + lod_level=1) response_wordseq = fluid.data( - name='response_wordseq', shape=[-1, 1], dtype='int64', lod_level=1) - labels = fluid.data( - name='labels', shape=[-1, 1], dtype='int64') + name='response_wordseq', + shape=[-1, 1], + dtype='int64', + lod_level=1) + labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64') input_inst = [context_wordseq, response_wordseq, labels] input_field = InputField(input_inst) - data_reader = fluid.io.PyReader(feed_list=input_inst, - capacity=4, iterable=False) + data_reader = fluid.io.PyReader( + feed_list=input_inst, capacity=4, iterable=False) logits = create_net( - is_training=False, - model_input=input_field, - args=args - ) + is_training=False, model_input=input_field, args=args) if args.use_cuda: place = fluid.CUDAPlace(0) @@ -68,7 +70,7 @@ def do_save_inference_model(args): exe.run(startup_prog) assert (args.init_from_params) or (args.init_from_pretrain_model) - + if args.init_from_params: save_load_io.init_from_params(args, exe, test_prog) elif args.init_from_pretrain_model: @@ -76,24 +78,22 @@ def do_save_inference_model(args): # saving 
inference model fluid.io.save_inference_model( - args.inference_model_dir, - feeded_var_names=[ - input_field.context_wordseq.name, - input_field.response_wordseq.name, - ], - target_vars=[ - logits, - ], - executor=exe, - main_program=test_prog, - model_filename="model.pdmodel", - params_filename="params.pdparams") + args.inference_model_dir, + feeded_var_names=[ + input_field.context_wordseq.name, + input_field.response_wordseq.name, + ], + target_vars=[logits, ], + executor=exe, + main_program=test_prog, + model_filename="model.pdmodel", + params_filename="params.pdparams") print("save inference model at %s" % (args.inference_model_dir)) if __name__ == "__main__": - args = PDConfig(yaml_file="./data/config/ade.yaml") + args = PDConfig(yaml_file="./data/config/ade.yaml") args.build() check_cuda(args.use_cuda) diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/main.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/main.py similarity index 99% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/main.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/main.py index d474aa190e6b7559e2ef69da507829eda2bf9206..f26969aa400589d41e6932277091b9b5374111ac 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/main.py +++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/main.py @@ -26,7 +26,6 @@ from inference_model import do_save_inference_model from ade.utils.configure import PDConfig - if __name__ == "__main__": args = PDConfig(yaml_file="./data/config/ade.yaml") diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/predict.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/predict.py similarity index 78% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/predict.py rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/predict.py index 279dff8844a262fbec13df81513642f999d8592b..6f5a081f93cb1c81048b65e555378308968813aa 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/predict.py 
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/predict.py @@ -32,7 +32,7 @@ from ade.utils.model_check import check_cuda import ade.utils.save_load_io as save_load_io -def do_predict(args): +def do_predict(args): """ predict function """ @@ -46,30 +46,32 @@ def do_predict(args): with fluid.unique_name.guard(): context_wordseq = fluid.data( - name='context_wordseq', shape=[-1, 1], dtype='int64', lod_level=1) + name='context_wordseq', + shape=[-1, 1], + dtype='int64', + lod_level=1) response_wordseq = fluid.data( - name='response_wordseq', shape=[-1, 1], dtype='int64', lod_level=1) - labels = fluid.data( - name='labels', shape=[-1, 1], dtype='int64') + name='response_wordseq', + shape=[-1, 1], + dtype='int64', + lod_level=1) + labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64') input_inst = [context_wordseq, response_wordseq, labels] input_field = InputField(input_inst) - data_reader = fluid.io.PyReader(feed_list=input_inst, - capacity=4, iterable=False) + data_reader = fluid.io.PyReader( + feed_list=input_inst, capacity=4, iterable=False) logits = create_net( - is_training=False, - model_input=input_field, - args=args - ) + is_training=False, model_input=input_field, args=args) logits.persistable = True fetch_list = [logits.name] #for_test is True if change the is_test attribute of operators to True test_prog = test_prog.clone(for_test=True) - if args.use_cuda: + if args.use_cuda: place = fluid.CUDAPlace(int(os.getenv('FLAGS_selected_gpus', '0'))) - else: + else: place = fluid.CPUPlace() exe = fluid.Executor(place) @@ -85,42 +87,39 @@ def do_predict(args): processor = reader.DataProcessor( data_path=args.predict_file, - max_seq_length=args.max_seq_len, + max_seq_length=args.max_seq_len, batch_size=args.batch_size) batch_generator = processor.data_generator( - place=place, - phase="test", - shuffle=False, - sample_pro=1) + place=place, phase="test", shuffle=False, sample_pro=1) num_test_examples = processor.get_num_examples(phase='test') 
     data_reader.decorate_batch_generator(batch_generator)
     data_reader.start()

     scores = []
-    while True:
-        try:
+    while True:
+        try:
             results = exe.run(compiled_test_prog, fetch_list=fetch_list)
             scores.extend(results[0])
         except fluid.core.EOFException:
             data_reader.reset()
             break
-    scores = scores[: num_test_examples]
+    scores = scores[:num_test_examples]

     print("Write the predicted results into the output_prediction_file")
     fw = io.open(args.output_prediction_file, 'w', encoding="utf8")
-    for index, score in enumerate(scores):
+    for index, score in enumerate(scores):
         fw.write("%s\t%s\n" % (index, score))

     print("finish........................................")

-if __name__ == "__main__":
-
+if __name__ == "__main__":
+
     args = PDConfig(yaml_file="./data/config/ade.yaml")
     args.build()
     args.Print()

     check_cuda(args.use_cuda)
-    do_predict(args)
+    do_predict(args)
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/run.sh b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/run.sh
similarity index 100%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/run.sh
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/run.sh
diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/train.py b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/train.py
similarity index 71%
rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/train.py
rename to PaddleNLP/dialogue_system/auto_dialogue_evaluation/train.py
index f9a8b28153899bf7e5aaa34c19276fa0158043ce..c1939866d5a3905b0e3ffb976ee042b5c0a65cd9 100755
--- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/train.py
+++ b/PaddleNLP/dialogue_system/auto_dialogue_evaluation/train.py
@@ -31,7 +31,7 @@
 from ade.utils.input_field import InputField
 from ade.utils.model_check import check_cuda
 import ade.utils.save_load_io as save_load_io

-try:
+try:
     import cPickle as pickle  #python 2
 except ImportError as e:
     import pickle  #python 3
@@ -47,24 +47,26 @@ def do_train(args):
         train_prog.random_seed =
args.random_seed startup_prog.random_seed = args.random_seed - with fluid.unique_name.guard(): + with fluid.unique_name.guard(): context_wordseq = fluid.data( - name='context_wordseq', shape=[-1, 1], dtype='int64', lod_level=1) + name='context_wordseq', + shape=[-1, 1], + dtype='int64', + lod_level=1) response_wordseq = fluid.data( - name='response_wordseq', shape=[-1, 1], dtype='int64', lod_level=1) - labels = fluid.data( - name='labels', shape=[-1, 1], dtype='int64') + name='response_wordseq', + shape=[-1, 1], + dtype='int64', + lod_level=1) + labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64') input_inst = [context_wordseq, response_wordseq, labels] input_field = InputField(input_inst) - data_reader = fluid.io.PyReader(feed_list=input_inst, - capacity=4, iterable=False) + data_reader = fluid.io.PyReader( + feed_list=input_inst, capacity=4, iterable=False) loss = create_net( - is_training=True, - model_input=input_field, - args=args - ) + is_training=True, model_input=input_field, args=args) loss.persistable = True # gradient clipping fluid.clip.set_gradient_clip(clip=fluid.clip.GradientClipByValue( @@ -74,20 +76,21 @@ def do_train(args): if args.use_cuda: dev_count = fluid.core.get_cuda_device_count() - place = fluid.CUDAPlace(int(os.getenv('FLAGS_selected_gpus', '0'))) - else: + place = fluid.CUDAPlace( + int(os.getenv('FLAGS_selected_gpus', '0'))) + else: dev_count = int(os.environ.get('CPU_NUM', 1)) place = fluid.CPUPlace() processor = reader.DataProcessor( data_path=args.training_file, - max_seq_length=args.max_seq_len, + max_seq_length=args.max_seq_len, batch_size=args.batch_size) batch_generator = processor.data_generator( place=place, phase="train", - shuffle=True, + shuffle=True, sample_pro=args.sample_pro) num_train_examples = processor.get_num_examples(phase='train') @@ -105,18 +108,23 @@ def do_train(args): args.init_from_pretrain_model == "") #init from some checkpoint, to resume the previous training - if args.init_from_checkpoint: + if 
args.init_from_checkpoint: save_load_io.init_from_checkpoint(args, exe, train_prog) #init from some pretrain models, to better solve the current task - if args.init_from_pretrain_model: + if args.init_from_pretrain_model: save_load_io.init_from_pretrain_model(args, exe, train_prog) if args.word_emb_init: print("start loading word embedding init ...") if six.PY2: - word_emb = np.array(pickle.load(io.open(args.word_emb_init, 'rb'))).astype('float32') + word_emb = np.array( + pickle.load(io.open(args.word_emb_init, 'rb'))).astype( + 'float32') else: - word_emb = np.array(pickle.load(io.open(args.word_emb_init, 'rb'), encoding="bytes")).astype('float32') + word_emb = np.array( + pickle.load( + io.open(args.word_emb_init, 'rb'), + encoding="bytes")).astype('float32') set_word_embedding(word_emb, place) print("finish init word embedding ...") @@ -124,69 +132,74 @@ def do_train(args): build_strategy.enable_inplace = True compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel( - loss_name=loss.name, build_strategy=build_strategy) + loss_name=loss.name, build_strategy=build_strategy) steps = 0 begin_time = time.time() - time_begin = time.time() + time_begin = time.time() - for epoch_step in range(args.epoch): + for epoch_step in range(args.epoch): data_reader.start() sum_loss = 0.0 ce_loss = 0.0 while True: - try: + try: fetch_list = [loss.name] outputs = exe.run(compiled_train_prog, fetch_list=fetch_list) np_loss = outputs sum_loss += np.array(np_loss).mean() ce_loss = np.array(np_loss).mean() - if steps % args.print_steps == 0: + if steps % args.print_steps == 0: time_end = time.time() used_time = time_end - time_begin current_time = time.strftime('%Y-%m-%d %H:%M:%S', - time.localtime(time.time())) - print('%s epoch: %d, step: %s, avg loss %s, speed: %f steps/s' % (current_time, epoch_step, steps, sum_loss / args.print_steps, args.print_steps / used_time)) + time.localtime(time.time())) + print( + '%s epoch: %d, step: %s, avg loss %s, speed: %f 
steps/s' + % (current_time, epoch_step, steps, sum_loss / + args.print_steps, args.print_steps / used_time)) sum_loss = 0.0 time_begin = time.time() - if steps % args.save_steps == 0: + if steps % args.save_steps == 0: if args.save_checkpoint: - save_load_io.save_checkpoint(args, exe, train_prog, "step_" + str(steps)) - if args.save_param: - save_load_io.save_param(args, exe, train_prog, "step_" + str(steps)) + save_load_io.save_checkpoint(args, exe, train_prog, + "step_" + str(steps)) + if args.save_param: + save_load_io.save_param(args, exe, train_prog, + "step_" + str(steps)) steps += 1 - except fluid.core.EOFException: + except fluid.core.EOFException: data_reader.reset() break - - if args.save_checkpoint: + + if args.save_checkpoint: save_load_io.save_checkpoint(args, exe, train_prog, "step_final") - if args.save_param: + if args.save_param: save_load_io.save_param(args, exe, train_prog, "step_final") - def get_cards(): + def get_cards(): num = 0 cards = os.environ.get('CUDA_VISIBLE_DEVICES', '') - if cards != '': + if cards != '': num = len(cards.split(",")) return num - if args.enable_ce: + if args.enable_ce: card_num = get_cards() pass_time_cost = time.time() - begin_time print("test_card_num", card_num) print("kpis\ttrain_duration_card%s\t%s" % (card_num, pass_time_cost)) print("kpis\ttrain_loss_card%s\t%f" % (card_num, ce_loss)) - + if __name__ == '__main__': - + args = PDConfig(yaml_file="./data/config/ade.yaml") args.build() args.Print() check_cuda(args.use_cuda) - + do_train(args) diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/.run_ce.sh b/PaddleNLP/dialogue_system/dialogue_general_understanding/.run_ce.sh similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/.run_ce.sh rename to PaddleNLP/dialogue_system/dialogue_general_understanding/.run_ce.sh diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/README.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/README.md 
similarity index 97% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/README.md rename to PaddleNLP/dialogue_system/dialogue_general_understanding/README.md index 9942c8cdd07b91b7850f8143e1c55a2145a5900a..2a817ec15a60e4d3c666abf29011b819429e6a66 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/README.md +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/README.md @@ -62,7 +62,7 @@ SWDA:Switchboard Dialogue Act Corpus;     数据集、相关模型下载:     linux环境下: ``` -python dgu/prepare_data_and_model.py +python dgu/prepare_data_and_model.py ```     数据路径:data/input/data @@ -72,7 +72,7 @@ python dgu/prepare_data_and_model.py     windows环境下: ``` -python dgu\prepare_data_and_model.py +python dgu\prepare_data_and_model.py ```     下载的数据集中已提供了训练集,测试集和验证集,用户如果需要重新生成某任务数据集的训练数据,可执行: @@ -164,19 +164,19 @@ task_type: train,predict, evaluate, inference, all, 选择5个参数选项中 训练示例: bash run.sh atis_intent train ``` -    如果为CPU训练: +    如果为CPU训练: ``` -请将run.sh内参数设置为: +请将run.sh内参数设置为: 1、export CUDA_VISIBLE_DEVICES= ``` -    如果为GPU训练: +    如果为GPU训练: ``` -请将run.sh内参数设置为: +请将run.sh内参数设置为: 1、如果为单卡训练(用户指定空闲的单卡): -export CUDA_VISIBLE_DEVICES=0 +export CUDA_VISIBLE_DEVICES=0 2、如果为多卡训练(用户指定空闲的多张卡): export CUDA_VISIBLE_DEVICES=0,1,2,3 ``` @@ -252,19 +252,19 @@ task_type: train,predict, evaluate, inference, all, 选择5个参数选项中 预测示例: bash run.sh atis_intent predict ``` -    如果为CPU预测: +    如果为CPU预测: ``` -请将run.sh内参数设置为: +请将run.sh内参数设置为: 1、export CUDA_VISIBLE_DEVICES= ``` -    如果为GPU预测: +    如果为GPU预测: ``` -请将run.sh内参数设置为: +请将run.sh内参数设置为: 支持单卡预测(用户指定空闲的单卡): -export CUDA_VISIBLE_DEVICES=0 +export CUDA_VISIBLE_DEVICES=0 ``` 注:预测时,如采用方式一,用户可通过修改run.sh中init_from_params参数来指定自己训练好的需要预测的模型,目前代码中默认为加载官方已经训练好的模型; @@ -348,7 +348,7 @@ task_type: train,predict, evaluate, inference, all, 选择5个参数选项中 注:评估计算ground_truth和predict_label之间的打分,默认CPU计算即可; -####     方式二: 执行评估相关的代码: +####     方式二: 执行评估相关的代码: ``` TASK_NAME="atis_intent" #指定预测的任务名称 @@ -363,7 +363,7 @@ python -u main.py \ #### 
windows环境下 ``` -python -u main.py --task_name=atis_intent --use_cuda=false --do_eval=true --evaluation_file=data\input\data\atis\atis_intent\test.txt --output_prediction_file=data\output\pred_atis_intent +python -u main.py --task_name=atis_intent --use_cuda=false --do_eval=true --evaluation_file=data\input\data\atis\atis_intent\test.txt --output_prediction_file=data\output\pred_atis_intent ``` ### 模型推断 @@ -378,22 +378,22 @@ task_type: train,predict, evaluate, inference, all, 选择5个参数选项中 保存模型示例: bash run.sh atis_intent inference ``` -    如果为CPU执行inference model过程: +    如果为CPU执行inference model过程: ``` -请将run.sh内参数设置为: +请将run.sh内参数设置为: 1、export CUDA_VISIBLE_DEVICES= ```     如果为GPU执行inference model过程: ``` -请将run.sh内参数设置为: +请将run.sh内参数设置为: 1、单卡模型推断(用户指定空闲的单卡): export CUDA_VISIBLE_DEVICES=0 ``` -####     方式二: 执行inference model相关的代码: +####     方式二: 执行inference model相关的代码: ``` TASK_NAME="atis_intent" #指定预测的任务名称 @@ -459,7 +459,7 @@ python -u main.py \     用户也可以根据自己的需求,组建自定义的模型,具体方法如下所示: -    a、自定义数据 +    a、自定义数据       如用户目前有数据集为**task_name**, 则在**data/input/data**下定义**task_name**文件夹,将数据集存放进去;在**dgu/reader.py**中,新增自定义的数据处理的类,如**udc**数据集对应**UDCProcessor**; 在**train.py**内设置**task_name**和**processor**的对应关系(如**processors = {'udc': reader.UDCProcessor}**). @@ -481,7 +481,7 @@ python -u main.py \ - Elizabeth Shriberg, Raj Dhillon, Sonali Bhagat, JeremyAng, and Hannah Carvey. 2004. The icsi meetingrecorder dialog act (mrda) corpus. Technical report,INTERNATIONAL COMPUTER SCIENCE INSTBERKELEY CA. - Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza-beth Shriberg, Rebecca Bates, Daniel Jurafsky, PaulTaylor, Rachel Martin, Carol Van Ess-Dykema, andMarie Meteer. 2000. Dialogue act modeling for au-tomatic tagging and recognition of conversationalspeech.Computational linguistics, 26(3):339–373. - Ye-Yi Wang, Li Deng, and Alex Acero. 2005. Spo-ken language understanding.IEEE Signal Process-ing Magazine, 22(5):16–31.Jason Williams, Antoine Raux, Deepak Ramachan-dran, and Alan Black. 2013. 
The dialog state tracking challenge. InProceedings of the SIGDIAL 2013Conference, pages 404–413. -- Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc VLe, Mohammad Norouzi, Wolfgang Macherey,Maxim Krikun, Yuan Cao, Qin Gao, KlausMacherey, et al. 2016. Google’s neural ma-chine translation system: Bridging the gap betweenhuman and machine translation.arXiv preprintarXiv:1609.08144.Kaisheng +- Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc VLe, Mohammad Norouzi, Wolfgang Macherey,Maxim Krikun, Yuan Cao, Qin Gao, KlausMacherey, et al. 2016. Google’s neural ma-chine translation system: Bridging the gap betweenhuman and machine translation.arXiv preprintarXiv:1609.08144.Kaisheng - Yao, Geoffrey Zweig, Mei-Yuh Hwang,Yangyang Shi, and Dong Yu. 2013. Recurrent neu-ral networks for language understanding. InInter-speech, pages 2524–2528. - Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, YingChen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu.2018. Multi-turn response selection for chatbotswith deep attention matching network. InProceed-ings of the 56th Annual Meeting of the Associationfor Computational Linguistics (Volume 1: Long Pa-pers), volume 1, pages 1118–1127. - Su Zhu and Kai Yu. 2017. Encoder-decoder withfocus-mechanism for sequence labelling based spo-ken language understanding. In2017 IEEE Interna-tional Conference on Acoustics, Speech and SignalProcessing (ICASSP), pages 5675–5679. IEEE. 
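The `predict.py` hunks earlier in this diff follow a common pattern: drain a `PyReader` via `exe.run` until `fluid.core.EOFException`, truncate the collected scores to `num_test_examples` (dropping padding added to fill the last batch), and write one `index\tscore` line per example. A framework-free sketch of that drain-truncate-write pattern; the function name and the plain-iterable stand-in for the reader/executor loop are hypothetical, not part of the repository:

```python
import io

def drain_and_write(batches, num_examples, output_path):
    """Collect per-batch scores, truncate to the real example count, and
    write one "index<TAB>score" line per example -- mirroring the loop in
    predict.py, with an ordinary iterable standing in for the PyReader."""
    scores = []
    for batch_scores in batches:   # predict.py loops until EOFException instead
        scores.extend(batch_scores)
    scores = scores[:num_examples]  # drop padding from the final batch
    with io.open(output_path, 'w', encoding="utf8") as fw:
        for index, score in enumerate(scores):
            fw.write(u"%s\t%s\n" % (index, score))
    return scores
```

The truncation step matters because the reader pads the last batch to a fixed batch size, so the raw score list can be longer than the test set.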
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/_ce.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/_ce.py
similarity index 67%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/_ce.py
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/_ce.py
index 0a3a4c20d51845a2deb5a9aad04d7af080d0445b..f9e2798d6469eb6e10a774d1a62cd90a2ed11385 100644
--- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/_ce.py
+++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/_ce.py
@@ -20,20 +20,26 @@
 from kpi import CostKpi
 from kpi import DurationKpi
 from kpi import AccKpi

-each_step_duration_atis_slot_card1 = DurationKpi('each_step_duration_atis_slot_card1', 0.01, 0, actived=True)
-train_loss_atis_slot_card1 = CostKpi('train_loss_atis_slot_card1', 0.08, 0, actived=True)
-train_acc_atis_slot_card1 = CostKpi('train_acc_atis_slot_card1', 0.01, 0, actived=True)
-each_step_duration_atis_slot_card4 = DurationKpi('each_step_duration_atis_slot_card4', 0.06, 0, actived=True)
-train_loss_atis_slot_card4 = CostKpi('train_loss_atis_slot_card4', 0.03, 0, actived=True)
-train_acc_atis_slot_card4 = CostKpi('train_acc_atis_slot_card4', 0.01, 0, actived=True)
+each_step_duration_atis_slot_card1 = DurationKpi(
+    'each_step_duration_atis_slot_card1', 0.01, 0, actived=True)
+train_loss_atis_slot_card1 = CostKpi(
+    'train_loss_atis_slot_card1', 0.08, 0, actived=True)
+train_acc_atis_slot_card1 = CostKpi(
+    'train_acc_atis_slot_card1', 0.01, 0, actived=True)
+each_step_duration_atis_slot_card4 = DurationKpi(
+    'each_step_duration_atis_slot_card4', 0.06, 0, actived=True)
+train_loss_atis_slot_card4 = CostKpi(
+    'train_loss_atis_slot_card4', 0.03, 0, actived=True)
+train_acc_atis_slot_card4 = CostKpi(
+    'train_acc_atis_slot_card4', 0.01, 0, actived=True)

 tracking_kpis = [
-    each_step_duration_atis_slot_card1,
-    train_loss_atis_slot_card1,
-    train_acc_atis_slot_card1,
-    each_step_duration_atis_slot_card4,
-    train_loss_atis_slot_card4,
-    train_acc_atis_slot_card4,
+    each_step_duration_atis_slot_card1,
+    train_loss_atis_slot_card1,
+    train_acc_atis_slot_card1,
+    each_step_duration_atis_slot_card4,
+    train_loss_atis_slot_card4,
+    train_acc_atis_slot_card4,
 ]
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/config/dgu.yaml b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/config/dgu.yaml
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/config/dgu.yaml
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/config/dgu.yaml
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/inference_models/inference_models.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/inference_models/inference_models.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/inference_models/inference_models.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/inference_models/inference_models.md
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/input/input.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/input/input.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/input/input.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/input/input.md
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/output/output.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/output/output.md
similarity index 100%
rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/output/output.md
rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/output/output.md
diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/pretrain_model/pretrain_model.md
b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/pretrain_model/pretrain_model.md similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/pretrain_model/pretrain_model.md rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/pretrain_model/pretrain_model.md diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/saved_models/saved_models.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/data/saved_models/saved_models.md similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/data/saved_models/saved_models.md rename to PaddleNLP/dialogue_system/dialogue_general_understanding/data/saved_models/saved_models.md diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/__init__.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/__init__.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/__init__.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/__init__.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/batching.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/batching.py similarity index 88% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/batching.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/batching.py index d668fd7dfeb0d0fa0bf025a13b8838f700d130b8..a81d2e9960d174f7c1be938c4968db188c2bb9ee 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/batching.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/batching.py @@ -75,8 +75,8 @@ def mask(batch_tokens, total_token_num, vocab_size, CLS=1, SEP=2, MASK=3): def prepare_batch_data(task_name, - insts, - max_len, + insts, + max_len, total_token_num, voc_size=0, pad_id=None, @@ -98,14 +98,18 @@ def prepare_batch_data(task_name, # compatible with squad, whose 
example includes start/end positions, # or unique id - if isinstance(insts[0][3], list): - if task_name == "atis_slot": - labels_list = [inst[3] + [0] * (max_len - len(inst[3])) for inst in insts] - labels_list = [np.array(labels_list).astype("int64").reshape([-1, max_len])] - elif task_name == "dstc2": + if isinstance(insts[0][3], list): + if task_name == "atis_slot": + labels_list = [ + inst[3] + [0] * (max_len - len(inst[3])) for inst in insts + ] + labels_list = [ + np.array(labels_list).astype("int64").reshape([-1, max_len]) + ] + elif task_name == "dstc2": labels_list = [inst[3] for inst in insts] labels_list = [np.array(labels_list).astype("int64")] - else: + else: for i in range(3, len(insts[0]), 1): labels = [inst[i] for inst in insts] labels = np.array(labels).astype("int64").reshape([-1, 1]) @@ -124,28 +128,25 @@ def prepare_batch_data(task_name, out = batch_src_ids # Second step: padding src_id, self_input_mask = pad_batch_data( - out, - max_len, - pad_idx=pad_id, - return_input_mask=True) + out, max_len, pad_idx=pad_id, return_input_mask=True) pos_id = pad_batch_data( - batch_pos_ids, - max_len, - pad_idx=pad_id, - return_pos=False, + batch_pos_ids, + max_len, + pad_idx=pad_id, + return_pos=False, return_input_mask=False) sent_id = pad_batch_data( - batch_sent_ids, - max_len, - pad_idx=pad_id, - return_pos=False, + batch_sent_ids, + max_len, + pad_idx=pad_id, + return_pos=False, return_input_mask=False) if mask_id >= 0: return_list = [ src_id, pos_id, sent_id, self_input_mask, mask_label, mask_pos ] + labels_list - else: + else: return_list = [src_id, pos_id, sent_id, self_input_mask] + labels_list return return_list if len(return_list) > 1 else return_list[0] @@ -163,13 +164,13 @@ def pad_batch_data(insts, corresponding position data and attention bias. 
""" return_list = [] - max_len = max_len_in if max_len_in != -1 else max(len(inst) for inst in insts) + max_len = max_len_in if max_len_in != -1 else max( + len(inst) for inst in insts) # Any token included in dict can be used to pad, since the paddings' loss # will be masked out by weights and make no effect on parameter gradients. inst_data = np.array( - [inst + list([pad_idx] * (max_len - len(inst))) for inst in insts - ]) + [inst + list([pad_idx] * (max_len - len(inst))) for inst in insts]) return_list += [inst_data.astype("int64").reshape([-1, max_len])] # position data @@ -183,10 +184,10 @@ def pad_batch_data(insts, if return_input_mask: # This is used to avoid attention on paddings. - input_mask_data = np.array([[1] * len(inst) + [0] * + input_mask_data = np.array([[1] * len(inst) + [0] * (max_len - len(inst)) for inst in insts]) input_mask_data = np.expand_dims(input_mask_data, axis=-1) - return_list += [input_mask_data.astype("float32")] + return_list += [input_mask_data.astype("float32")] if return_max_len: return_list += [max_len] diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/bert.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/bert.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/bert.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/bert.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_paradigm.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_paradigm.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_paradigm.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_paradigm.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_predict_pack.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_predict_pack.py similarity index 66% rename from 
PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_predict_pack.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_predict_pack.py index df0fda32d44a4024805937e59c3f2636f74f9286..13b14f8102f51cfb6b74769d8eb392f0b102d65b 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/define_predict_pack.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/define_predict_pack.py @@ -21,31 +21,34 @@ import paddle import paddle.fluid as fluid -class DefinePredict(object): +class DefinePredict(object): """ Packaging Prediction Results """ - def __init__(self): + + def __init__(self): """ init """ - self.task_map = {'udc': 'get_matching_res', - 'swda': 'get_cls_res', - 'mrda': 'get_cls_res', - 'atis_intent': 'get_cls_res', - 'atis_slot': 'get_sequence_tagging', - 'dstc2': 'get_multi_cls_res', - 'dstc2_asr': 'get_multi_cls_res', - 'multi-woz': 'get_multi_cls_res'} + self.task_map = { + 'udc': 'get_matching_res', + 'swda': 'get_cls_res', + 'mrda': 'get_cls_res', + 'atis_intent': 'get_cls_res', + 'atis_slot': 'get_sequence_tagging', + 'dstc2': 'get_multi_cls_res', + 'dstc2_asr': 'get_multi_cls_res', + 'multi-woz': 'get_multi_cls_res' + } - def get_matching_res(self, probs, params=None): + def get_matching_res(self, probs, params=None): """ get matching score """ probs = list(probs) return probs[1] - def get_cls_res(self, probs, params=None): + def get_cls_res(self, probs, params=None): """ get da classify tag """ @@ -54,7 +57,7 @@ class DefinePredict(object): tag = probs.index(max_prob) return tag - def get_sequence_tagging(self, probs, params=None): + def get_sequence_tagging(self, probs, params=None): """ get sequence tagging tag """ @@ -63,23 +66,19 @@ class DefinePredict(object): labels = [" ".join([str(l) for l in list(l_l)]) for l_l in batch_labels] return labels - def get_multi_cls_res(self, probs, params=None): + def get_multi_cls_res(self, probs, params=None): """ get dst classify tag """ 
labels = [] probs = list(probs) - for i in range(len(probs)): - if probs[i] >= 0.5: + for i in range(len(probs)): + if probs[i] >= 0.5: labels.append(i) - if not labels: + if not labels: max_prob = max(probs) label_str = str(probs.index(max_prob)) - else: + else: label_str = " ".join([str(l) for l in sorted(labels)]) return label_str - - - - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/evaluation.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/evaluation.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/evaluation.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/evaluation.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/optimization.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/optimization.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/optimization.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/optimization.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/prepare_data_and_model.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/prepare_data_and_model.py similarity index 58% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/prepare_data_and_model.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/prepare_data_and_model.py index d47f8c4bb183811617d7612f31a7f706f6accda9..83c72064c7cf9e0bc8822f0254fc468d51678f3e 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/prepare_data_and_model.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/prepare_data_and_model.py @@ -20,51 +20,60 @@ import sys import io import os - -URLLIB=urllib -if sys.version_info >= (3, 0): +URLLIB = urllib +if sys.version_info >= (3, 0): import urllib.request - URLLIB=urllib.request + URLLIB = urllib.request -DATA_MODEL_PATH = 
{"DATA_PATH": "https://baidu-nlp.bj.bcebos.com/dmtk_data_1.0.0.tar.gz",
-    "PRETRAIN_MODEL": "https://bert-models.bj.bcebos.com/uncased_L-12_H-768_A-12.tar.gz",
-    "TRAINED_MODEL": "https://baidu-nlp.bj.bcebos.com/dgu_models_2.0.0.tar.gz"}
+DATA_MODEL_PATH = {
+    "DATA_PATH": "https://baidu-nlp.bj.bcebos.com/dmtk_data_1.0.0.tar.gz",
+    "PRETRAIN_MODEL":
+    "https://bert-models.bj.bcebos.com/uncased_L-12_H-768_A-12.tar.gz",
+    "TRAINED_MODEL": "https://baidu-nlp.bj.bcebos.com/dgu_models_2.0.0.tar.gz"
+}

-PATH_MAP = {'DATA_PATH': "./data/input",
-            'PRETRAIN_MODEL': './data/pretrain_model',
-            'TRAINED_MODEL': './data/saved_models'}
+PATH_MAP = {
+    'DATA_PATH': "./data/input",
+    'PRETRAIN_MODEL': './data/pretrain_model',
+    'TRAINED_MODEL': './data/saved_models'
+}

-def un_tar(tar_name, dir_name):
-    try:
+def un_tar(tar_name, dir_name):
+    try:
         t = tarfile.open(tar_name)
-        t.extractall(path = dir_name)
+        t.extractall(path=dir_name)
         return True
     except Exception as e:
         print(e)
         return False

-def download_model_and_data():
+def download_model_and_data():
     print("Downloading dgu data, pretrain model and trained models......")
     print("This process is quite long, please wait patiently............")
-    for path in ['./data/input/data', './data/pretrain_model/uncased_L-12_H-768_A-12', './data/saved_models/trained_models']:
-        if not os.path.exists(path):
+    for path in [
+            './data/input/data',
+            './data/pretrain_model/uncased_L-12_H-768_A-12',
+            './data/saved_models/trained_models'
+    ]:
+        if not os.path.exists(path):
             continue
         shutil.rmtree(path)
-    for path_key in DATA_MODEL_PATH:
+    for path_key in DATA_MODEL_PATH:
         filename = os.path.basename(DATA_MODEL_PATH[path_key])
-        URLLIB.urlretrieve(DATA_MODEL_PATH[path_key], os.path.join("./", filename))
+        URLLIB.urlretrieve(DATA_MODEL_PATH[path_key],
+                           os.path.join("./", filename))
         state = un_tar(filename, PATH_MAP[path_key])
-        if not state:
+        if not state:
            print("Tar %s error....."
% path_key) return False os.remove(filename) return True -if __name__ == "__main__": +if __name__ == "__main__": state = download_model_and_data() - if not state: + if not state: exit(1) print("Downloading data and models sucess......") diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/reader.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/reader.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/reader.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/reader.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/README.md b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/README.md similarity index 99% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/README.md rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/README.md index 0f6f4f410fac28a469050b64fce77adaa1824671..351dc8d249ec1b460e9ef00ac100f56579791f07 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/README.md +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/README.md @@ -6,7 +6,7 @@ scripts:运行数据处理脚本目录, 将官方公开数据集转换成模 python run_build_data.py udc 生成数据在dialogue_general_understanding/data/input/data/udc -2)、生成DA任务所需要的训练集、开发集、测试集时: +2)、生成DA任务所需要的训练集、开发集、测试集时: python run_build_data.py swda python run_build_data.py mrda 生成数据分别在dialogue_general_understanding/data/input/data/swda和dialogue_general_understanding/data/input/data/mrda @@ -19,6 +19,3 @@ python run_build_data.py udc python run_build_data.py atis 生成槽位识别数据在dialogue_general_understanding/data/input/data/atis/atis_slot 生成意图识别数据在dialogue_general_understanding/data/input/data/atis/atis_intent - - - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py 
similarity index 77% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py index 09f3746039d3d55f9b824e76bf8434e95bffa670..66b0d3ca3dc7fb5673d83fcaab24abc59524c950 100755 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_atis_dataset.py @@ -12,7 +12,6 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. - """build swda train dev test dataset""" import json @@ -23,11 +22,12 @@ import io import re -class ATIS(object): +class ATIS(object): """ nlu dataset atis data process """ - def __init__(self): + + def __init__(self): """ init instance """ @@ -41,91 +41,94 @@ class ATIS(object): self.map_tag_slot = "../../data/input/data/atis/atis_slot/map_tag_slot_id.txt" self.map_tag_intent = "../../data/input/data/atis/atis_intent/map_tag_intent_id.txt" - def _load_file(self, data_type): + def _load_file(self, data_type): """ load dataset filename """ slot_stat = os.path.exists(self.out_slot_dir) - if not slot_stat: + if not slot_stat: os.makedirs(self.out_slot_dir) intent_stat = os.path.exists(self.out_intent_dir) - if not intent_stat: + if not intent_stat: os.makedirs(self.out_intent_dir) src_examples = [] json_file = os.path.join(self.src_dir, "%s.json" % data_type) load_f = io.open(json_file, 'r', encoding="utf8") json_dict = json.load(load_f) examples = json_dict['rasa_nlu_data']['common_examples'] - for example in examples: + for example in examples: text = example.get('text') intent = example.get('intent') entities = example.get('entities') src_examples.append((text, intent, entities)) return src_examples - def _parser_intent_data(self, examples, data_type): + def 
_parser_intent_data(self, examples, data_type): """ parser intent dataset """ out_filename = "%s/%s.txt" % (self.out_intent_dir, data_type) fw = io.open(out_filename, 'w', encoding="utf8") - for example in examples: - if example[1] not in self.intent_dict: + for example in examples: + if example[1] not in self.intent_dict: self.intent_dict[example[1]] = self.intent_id self.intent_id += 1 - fw.write(u"%s\t%s\n" % (self.intent_dict[example[1]], example[0].lower())) + fw.write(u"%s\t%s\n" % + (self.intent_dict[example[1]], example[0].lower())) fw = io.open(self.map_tag_intent, 'w', encoding="utf8") - for tag in self.intent_dict: + for tag in self.intent_dict: fw.write(u"%s\t%s\n" % (tag, self.intent_dict[tag])) - def _parser_slot_data(self, examples, data_type): + def _parser_slot_data(self, examples, data_type): """ parser slot dataset """ out_filename = "%s/%s.txt" % (self.out_slot_dir, data_type) fw = io.open(out_filename, 'w', encoding="utf8") - for example in examples: + for example in examples: tags = [] text = example[0] entities = example[2] - if not entities: + if not entities: tags = [str(self.slot_dict['O'])] * len(text.strip().split()) continue - for i in range(len(entities)): + for i in range(len(entities)): enty = entities[i] start = enty['start'] value_num = len(enty['value'].split()) tags_slot = [] - for j in range(value_num): - if j == 0: + for j in range(value_num): + if j == 0: bround_tag = "B" - else: + else: bround_tag = "I" tag = "%s-%s" % (bround_tag, enty['entity']) - if tag not in self.slot_dict: + if tag not in self.slot_dict: self.slot_dict[tag] = self.slot_id self.slot_id += 1 tags_slot.append(str(self.slot_dict[tag])) - if i == 0: - if start not in [0, 1]: - prefix_num = len(text[: start].strip().split()) + if i == 0: + if start not in [0, 1]: + prefix_num = len(text[:start].strip().split()) tags.extend([str(self.slot_dict['O'])] * prefix_num) tags.extend(tags_slot) - else: - prefix_num = len(text[entities[i - 1]['end']: 
start].strip().split()) + else: + prefix_num = len(text[entities[i - 1]['end']:start].strip() + .split()) tags.extend([str(self.slot_dict['O'])] * prefix_num) tags.extend(tags_slot) - if entities[-1]['end'] < len(text): + if entities[-1]['end'] < len(text): suffix_num = len(text[entities[-1]['end']:].strip().split()) tags.extend([str(self.slot_dict['O'])] * suffix_num) - fw.write(u"%s\t%s\n" % (text.encode('utf8'), " ".join(tags).encode('utf8'))) - + fw.write(u"%s\t%s\n" % + (text.encode('utf8'), " ".join(tags).encode('utf8'))) + fw = io.open(self.map_tag_slot, 'w', encoding="utf8") - for slot in self.slot_dict: + for slot in self.slot_dict: fw.write(u"%s\t%s\n" % (slot, self.slot_dict[slot])) - def get_train_dataset(self): + def get_train_dataset(self): """ parser train dataset and print train.txt """ @@ -133,7 +136,7 @@ class ATIS(object): self._parser_intent_data(train_examples, "train") self._parser_slot_data(train_examples, "train") - def get_test_dataset(self): + def get_test_dataset(self): """ parser test dataset and print test.txt """ @@ -141,7 +144,7 @@ class ATIS(object): self._parser_intent_data(test_examples, "test") self._parser_slot_data(test_examples, "test") - def main(self): + def main(self): """ run data process """ @@ -149,10 +152,6 @@ class ATIS(object): self.get_test_dataset() -if __name__ == "__main__": +if __name__ == "__main__": atis_inst = ATIS() atis_inst.main() - - - - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py similarity index 75% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py index 9655ce7268028ac5a30b843105de09c1f13a7b68..15e457deac46e44dba1671250922f1ad4599ad05 100755 --- 
a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_dstc2_dataset.py @@ -24,11 +24,12 @@ import re import commonlib -class DSTC2(object): +class DSTC2(object): """ dialogue state tracking dstc2 data process """ - def __init__(self): + + def __init__(self): """ init instance """ @@ -42,16 +43,17 @@ class DSTC2(object): self._load_file() self._load_ontology() - def _load_file(self): + def _load_file(self): """ load dataset filename """ self.data_dict = commonlib.load_dict(self.data_list) - for data_type in self.data_dict: - for i in range(len(self.data_dict[data_type])): - self.data_dict[data_type][i] = os.path.join(self.src_dir, self.data_dict[data_type][i]) + for data_type in self.data_dict: + for i in range(len(self.data_dict[data_type])): + self.data_dict[data_type][i] = os.path.join( + self.src_dir, self.data_dict[data_type][i]) - def _load_ontology(self): + def _load_ontology(self): """ load ontology tag """ @@ -60,8 +62,8 @@ class DSTC2(object): fr = io.open(self.onto_json, 'r', encoding="utf8") ontology = json.load(fr) slots_values = ontology['informable'] - for slot in slots_values: - for value in slots_values[slot]: + for slot in slots_values: + for value in slots_values[slot]: key = "%s_%s" % (slot, value) self.map_tag_dict[key] = tag_id tag_id += 1 @@ -69,22 +71,22 @@ class DSTC2(object): self.map_tag_dict[key] = tag_id tag_id += 1 - def _parser_dataset(self, data_type): + def _parser_dataset(self, data_type): """ parser train dev test dataset """ stat = os.path.exists(self.out_dir) - if not stat: + if not stat: os.makedirs(self.out_dir) asr_stat = os.path.exists(self.out_asr_dir) - if not asr_stat: + if not asr_stat: os.makedirs(self.out_asr_dir) out_file = os.path.join(self.out_dir, "%s.txt" % data_type) out_asr_file = os.path.join(self.out_asr_dir, "%s.txt" % data_type) fw = io.open(out_file, 'w', encoding="utf8") fw_asr = 
io.open(out_asr_file, 'w', encoding="utf8") data_list = self.data_dict.get(data_type) - for fn in data_list: + for fn in data_list: log_file = os.path.join(fn, "log.json") label_file = os.path.join(fn, "label.json") f_log = io.open(log_file, 'r', encoding="utf8") @@ -93,49 +95,59 @@ class DSTC2(object): label_json = json.load(f_label) session_id = log_json['session-id'] assert len(label_json["turns"]) == len(log_json["turns"]) - for i in range(len(label_json["turns"])): + for i in range(len(label_json["turns"])): log_turn = log_json["turns"][i] label_turn = label_json["turns"][i] assert log_turn["turn-index"] == label_turn["turn-index"] - labels = ["%s_%s" % (slot, label_turn["goal-labels"][slot]) for slot in label_turn["goal-labels"]] - labels_ids = " ".join([str(self.map_tag_dict.get(label, self.map_tag_dict["%s_none" % label.split('_')[0]])) for label in labels]) + labels = [ + "%s_%s" % (slot, label_turn["goal-labels"][slot]) + for slot in label_turn["goal-labels"] + ] + labels_ids = " ".join([ + str( + self.map_tag_dict.get(label, self.map_tag_dict[ + "%s_none" % label.split('_')[0]])) + for label in labels + ]) mach = log_turn['output']['transcript'] user = label_turn['transcription'] - if not labels_ids.strip(): + if not labels_ids.strip(): labels_ids = self.map_tag_dict['none'] out = "%s\t%s\1%s\t%s" % (session_id, mach, user, labels_ids) - user_asr = log_turn['input']['live']['asr-hyps'][0]['asr-hyp'].strip() - out_asr = "%s\t%s\1%s\t%s" % (session_id, mach, user_asr, labels_ids) + user_asr = log_turn['input']['live']['asr-hyps'][0][ + 'asr-hyp'].strip() + out_asr = "%s\t%s\1%s\t%s" % (session_id, mach, user_asr, + labels_ids) fw.write(u"%s\n" % out.encode('utf8')) fw_asr.write(u"%s\n" % out_asr.encode('utf8')) - def get_train_dataset(self): + def get_train_dataset(self): """ parser train dataset and print train.txt """ self._parser_dataset("train") - def get_dev_dataset(self): + def get_dev_dataset(self): """ parser dev dataset and print dev.txt """ 
self._parser_dataset("dev") - def get_test_dataset(self): + def get_test_dataset(self): """ parser test dataset and print test.txt """ self._parser_dataset("test") - def get_labels(self): + def get_labels(self): """ get tag and map ids file """ fw = io.open(self.map_tag, 'w', encoding="utf8") - for elem in self.map_tag_dict: + for elem in self.map_tag_dict: fw.write(u"%s\t%s\n" % (elem, self.map_tag_dict[elem])) - def main(self): + def main(self): """ run data process """ @@ -144,10 +156,7 @@ class DSTC2(object): self.get_test_dataset() self.get_labels() -if __name__ == "__main__": + +if __name__ == "__main__": dstc_inst = DSTC2() dstc_inst.main() - - - - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py similarity index 81% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py index e5c0406fce45637364e3dc8ed7cc2ed7739c15a1..8a6419d8ea4b631688c3361d0a96a4b44722682b 100755 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_mrda_dataset.py @@ -23,11 +23,12 @@ import re import commonlib -class MRDA(object): +class MRDA(object): """ dialogue act dataset mrda data process """ - def __init__(self): + + def __init__(self): """ init instance """ @@ -41,7 +42,7 @@ class MRDA(object): self._load_file() self.tag_dict = commonlib.load_voc(self.voc_map_tag) - def _load_file(self): + def _load_file(self): """ load dataset filename """ @@ -49,30 +50,30 @@ class MRDA(object): self.trans_dict = {} self.data_dict = commonlib.load_dict(self.data_list) file_list, file_path = commonlib.get_file_list(self.src_dir) - for i in range(len(file_list)): + for i in 
range(len(file_list)): name = file_list[i] keyword = name.split('.')[0] - if 'dadb' in name: + if 'dadb' in name: self.dadb_dict[keyword] = file_path[i] - if 'trans' in name: + if 'trans' in name: self.trans_dict[keyword] = file_path[i] - def load_dadb(self, data_type): + def load_dadb(self, data_type): """ load dadb dataset """ dadb_dict = {} conv_id_list = [] dadb_list = self.data_dict[data_type] - for dadb_key in dadb_list: + for dadb_key in dadb_list: dadb_file = self.dadb_dict[dadb_key] fr = io.open(dadb_file, 'r', encoding="utf8") - row = csv.reader(fr, delimiter = ',') - for line in row: + row = csv.reader(fr, delimiter=',') + for line in row: elems = line conv_id = elems[2] conv_id_list.append(conv_id) - if len(elems) != 14: + if len(elems) != 14: continue error_code = elems[3] da_tag = elems[-9] @@ -80,17 +81,17 @@ class MRDA(object): dadb_dict[conv_id] = (error_code, da_ori_tag, da_tag) return dadb_dict, conv_id_list - def load_trans(self, data_type): + def load_trans(self, data_type): """load trans data""" trans_dict = {} trans_list = self.data_dict[data_type] - for trans_key in trans_list: + for trans_key in trans_list: trans_file = self.trans_dict[trans_key] fr = io.open(trans_file, 'r', encoding="utf8") - row = csv.reader(fr, delimiter = ',') - for line in row: + row = csv.reader(fr, delimiter=',') + for line in row: elems = line - if len(elems) != 3: + if len(elems) != 3: continue conv_id = elems[0] text = elems[1] @@ -98,7 +99,7 @@ class MRDA(object): trans_dict[conv_id] = (text, text_process) return trans_dict - def _parser_dataset(self, data_type): + def _parser_dataset(self, data_type): """ parser train dev test dataset """ @@ -106,50 +107,51 @@ class MRDA(object): dadb_dict, conv_id_list = self.load_dadb(data_type) trans_dict = self.load_trans(data_type) fw = io.open(out_filename, 'w', encoding="utf8") - for elem in conv_id_list: + for elem in conv_id_list: v_dadb = dadb_dict[elem] v_trans = trans_dict[elem] da_tag = v_dadb[2] - if da_tag not in 
self.tag_dict: + if da_tag not in self.tag_dict: continue tag = self.tag_dict[da_tag] - if tag == "Z": + if tag == "Z": continue - if tag not in self.map_tag_dict: + if tag not in self.map_tag_dict: self.map_tag_dict[tag] = self.tag_id self.tag_id += 1 caller = elem.split('_')[0].split('-')[-1] conv_no = elem.split('_')[0].split('-')[0] - out = "%s\t%s\t%s\t%s" % (conv_no, self.map_tag_dict[tag], caller, v_trans[0]) + out = "%s\t%s\t%s\t%s" % (conv_no, self.map_tag_dict[tag], caller, + v_trans[0]) fw.write(u"%s\n" % out) - def get_train_dataset(self): + def get_train_dataset(self): """ parser train dataset and print train.txt """ self._parser_dataset("train") - def get_dev_dataset(self): + def get_dev_dataset(self): """ parser dev dataset and print dev.txt """ self._parser_dataset("dev") - def get_test_dataset(self): + def get_test_dataset(self): """ parser test dataset and print test.txt """ self._parser_dataset("test") - def get_labels(self): + def get_labels(self): """ get tag and map ids file """ fw = io.open(self.map_tag, 'w', encoding="utf8") - for elem in self.map_tag_dict: + for elem in self.map_tag_dict: fw.write(u"%s\t%s\n" % (elem, self.map_tag_dict[elem])) - def main(self): + def main(self): """ run data process """ @@ -158,10 +160,7 @@ class MRDA(object): self.get_test_dataset() self.get_labels() -if __name__ == "__main__": + +if __name__ == "__main__": mrda_inst = MRDA() mrda_inst.main() - - - - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py similarity index 80% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py index 441d2852c760e9cef31147e666855f89dba406bb..913c4ade38785612592e6ef21a7bea740601a09e 100755 --- 
a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/build_swda_dataset.py @@ -23,11 +23,12 @@ import re import commonlib -class SWDA(object): +class SWDA(object): """ dialogue act dataset swda data process """ - def __init__(self): + + def __init__(self): """ init instance """ @@ -39,94 +40,94 @@ class SWDA(object): self.src_dir = "../../data/input/data/swda/source_data/swda" self._load_file() - def _load_file(self): + def _load_file(self): """ load dataset filename """ self.data_dict = commonlib.load_dict(self.data_list) self.file_dict = {} child_dir = commonlib.get_dir_list(self.src_dir) - for chd in child_dir: + for chd in child_dir: file_list, file_path = commonlib.get_file_list(chd) - for i in range(len(file_list)): - name = file_list[i] + for i in range(len(file_list)): + name = file_list[i] keyword = "sw%s" % name.split('.')[0].split('_')[-1] self.file_dict[keyword] = file_path[i] - def _parser_dataset(self, data_type): + def _parser_dataset(self, data_type): """ parser train dev test dataset """ out_filename = "%s/%s.txt" % (self.out_dir, data_type) fw = io.open(out_filename, 'w', encoding='utf8') - for name in self.data_dict[data_type]: + for name in self.data_dict[data_type]: file_path = self.file_dict[name] fr = io.open(file_path, 'r', encoding="utf8") idx = 0 - row = csv.reader(fr, delimiter = ',') - for r in row: - if idx == 0: + row = csv.reader(fr, delimiter=',') + for r in row: + if idx == 0: idx += 1 continue out = self._parser_utterence(r) fw.write(u"%s\n" % out) - def _clean_text(self, text): + def _clean_text(self, text): """ text cleaning for dialogue act dataset """ - if text.startswith('<') and text.endswith('>.'): + if text.startswith('<') and text.endswith('>.'): return text if "[" in text or "]" in text: stat = True - else: + else: stat = False group = re.findall("\[.*?\+.*?\]", text) - while group and stat: - for elem in 
group: + while group and stat: + for elem in group: elem_src = elem elem = re.sub('\+', '', elem.lstrip('[').rstrip(']')) text = text.replace(elem_src, elem) - if "[" in text or "]" in text: + if "[" in text or "]" in text: stat = True - else: + else: stat = False group = re.findall("\[.*?\+.*?\]", text) - if "{" in text or "}" in text: + if "{" in text or "}" in text: stat = True - else: + else: stat = False group = re.findall("{[A-Z].*?}", text) - while group and stat: + while group and stat: child_group = re.findall("{[A-Z]*(.*?)}", text) - for i in range(len(group)): + for i in range(len(group)): text = text.replace(group[i], child_group[i]) - if "{" in text or "}" in text: + if "{" in text or "}" in text: stat = True - else: + else: stat = False group = re.findall("{[A-Z].*?}", text) - if "(" in text or ")" in text: + if "(" in text or ")" in text: stat = True - else: + else: stat = False group = re.findall("\(\(.*?\)\)", text) - while group and stat: - for elem in group: - if elem: + while group and stat: + for elem in group: + if elem: elem_clean = re.sub("\(|\)", "", elem) text = text.replace(elem, elem_clean) - else: + else: text = text.replace(elem, "mumblex") if "(" in text or ")" in text: stat = True - else: + else: stat = False group = re.findall("\(\((.*?)\)\)", text) group = re.findall("\<.*?\>", text) - if group: - for elem in group: + if group: + for elem in group: text = text.replace(elem, "") text = re.sub(r" \'s", "\'s", text) @@ -137,24 +138,24 @@ class SWDA(object): text = re.sub("\[|\]|\+|\>|\<|\{|\}", "", text) return text.strip().lower() - def _map_tag(self, da_tag): + def _map_tag(self, da_tag): """ map tag to 42 classes """ curr_da_tags = [] curr_das = re.split(r"\s*[,;]\s*", da_tag) - for curr_da in curr_das: + for curr_da in curr_das: if curr_da == "qy_d" or curr_da == "qw^d" or curr_da == "b^m": pass elif curr_da == "nn^e": curr_da = "ng" elif curr_da == "ny^e": curr_da = "na" - else: + else: curr_da = re.sub(r'(.)\^.*', r'\1', 
curr_da) curr_da = re.sub(r'[\(\)@*]', '', curr_da) tag = curr_da - if tag in ('qr', 'qy'): + if tag in ('qr', 'qy'): tag = 'qy' elif tag in ('fe', 'ba'): tag = 'ba' @@ -170,12 +171,12 @@ class SWDA(object): tag = 'fo_o_fw_"_by_bc' curr_da = tag curr_da_tags.append(curr_da) - if curr_da_tags[0] not in self.map_tag_dict: + if curr_da_tags[0] not in self.map_tag_dict: self.map_tag_dict[curr_da_tags[0]] = self.tag_id self.tag_id += 1 return self.map_tag_dict[curr_da_tags[0]] - - def _parser_utterence(self, line): + + def _parser_utterence(self, line): """ parser one turn dialogue """ @@ -188,34 +189,34 @@ class SWDA(object): out = "%s\t%s\t%s\t%s" % (conversation_no, act_tag, caller, text) return out - - def get_train_dataset(self): + + def get_train_dataset(self): """ parser train dataset and print train.txt """ self._parser_dataset("train") - def get_dev_dataset(self): + def get_dev_dataset(self): """ parser dev dataset and print dev.txt """ self._parser_dataset("dev") - def get_test_dataset(self): + def get_test_dataset(self): """ parser test dataset and print test.txt """ self._parser_dataset("test") - def get_labels(self): + def get_labels(self): """ get tag and map ids file """ fw = io.open(self.map_tag, 'w', encoding='utf8') - for elem in self.map_tag_dict: + for elem in self.map_tag_dict: fw.write(u"%s\t%s\n" % (elem, self.map_tag_dict[elem])) - def main(self): + def main(self): """ run data process """ @@ -224,10 +225,7 @@ class SWDA(object): self.get_test_dataset() self.get_labels() -if __name__ == "__main__": + +if __name__ == "__main__": swda_inst = SWDA() swda_inst.main() - - - - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/commonlib.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/commonlib.py similarity index 86% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/commonlib.py rename to 
PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/commonlib.py index b223a9f2b4eb0b8c9e650759a7b39cc5282cdceb..fd07b4a710d1b000a4896cc3f34785d127ac606f 100755 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/commonlib.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/commonlib.py @@ -25,52 +25,49 @@ def get_file_list(dir_name): file_list = list() file_path = list() for root, dirs, files in os.walk(dir_name): - for file in files: + for file in files: file_list.append(file) file_path.append(os.path.join(root, file)) return file_list, file_path -def get_dir_list(dir_name): +def get_dir_list(dir_name): """ get directory names """ child_dir = [] dir_list = os.listdir(dir_name) - for cur_file in dir_list: + for cur_file in dir_list: path = os.path.join(dir_name, cur_file) - if not os.path.isdir(path): + if not os.path.isdir(path): continue child_dir.append(path) return child_dir -def load_dict(conf): +def load_dict(conf): """ load swda dataset config """ conf_dict = dict() fr = io.open(conf, 'r', encoding="utf8") - for line in fr: + for line in fr: line = line.strip() elems = line.split('\t') - if elems[0] not in conf_dict: + if elems[0] not in conf_dict: conf_dict[elems[0]] = [] conf_dict[elems[0]].append(elems[1]) return conf_dict -def load_voc(conf): +def load_voc(conf): """ load map dict """ map_dict = {} fr = io.open(conf, 'r', encoding="utf8") - for line in fr: + for line in fr: line = line.strip() elems = line.split('\t') map_dict[elems[0]] = elems[1] return map_dict - - - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/dstc2.conf b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/dstc2.conf similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/dstc2.conf rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/dstc2.conf diff --git 
a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/mrda.conf b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/mrda.conf similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/mrda.conf rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/mrda.conf diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/multi-woz.conf b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/multi-woz.conf similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/multi-woz.conf rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/multi-woz.conf diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/swda.conf b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/swda.conf similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/conf/swda.conf rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/conf/swda.conf diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/run_build_data.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/run_build_data.py similarity index 80% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/run_build_data.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/run_build_data.py index b1a61a0f9938bd7fb647194f3902ae62d5c6b509..273a39513e7550d5fa507d854a0e6611c7f1b16f 100755 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/scripts/run_build_data.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/scripts/run_build_data.py @@ -20,29 +20,29 @@ from build_dstc2_dataset import DSTC2 from build_mrda_dataset import MRDA from build_swda_dataset import 
SWDA - -if __name__ == "__main__": +if __name__ == "__main__": task_name = sys.argv[1] task_name = task_name.lower() - - if task_name not in ['swda', 'mrda', 'atis', 'dstc2', 'udc']: + + if task_name not in ['swda', 'mrda', 'atis', 'dstc2', 'udc']: print("task name error: we support [swda|mrda|atis|dstc2|udc]") exit(1) - - if task_name == 'swda': + + if task_name == 'swda': swda_inst = SWDA() swda_inst.main() - elif task_name == 'mrda': + elif task_name == 'mrda': mrda_inst = MRDA() mrda_inst.main() - elif task_name == 'atis': + elif task_name == 'atis': atis_inst = ATIS() atis_inst.main() - shutil.copyfile("../../data/input/data/atis/atis_slot/test.txt", "../../data/input/data/atis/atis_slot/dev.txt") - shutil.copyfile("../../data/input/data/atis/atis_intent/test.txt", "../../data/input/data/atis/atis_intent/dev.txt") - elif task_name == 'dstc2': + shutil.copyfile("../../data/input/data/atis/atis_slot/test.txt", + "../../data/input/data/atis/atis_slot/dev.txt") + shutil.copyfile("../../data/input/data/atis/atis_intent/test.txt", + "../../data/input/data/atis/atis_intent/dev.txt") + elif task_name == 'dstc2': dstc_inst = DSTC2() dstc_inst.main() - else: + else: exit(0) - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/tokenization.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/tokenization.py similarity index 99% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/tokenization.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/tokenization.py index 8268f8e8ec6f86513344c1523dcd703abeb61c44..175979178a18f9635a4b206d695f2a7547c66786 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/tokenization.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/tokenization.py @@ -12,7 +12,6 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. - """Tokenization classes.""" from __future__ import absolute_import diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/transformer_encoder.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/transformer_encoder.py similarity index 99% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/transformer_encoder.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/transformer_encoder.py index beb6cccf1143bbbf287dc88f4d121a9bcc7b1cf8..7bdb32c6ab054a46296a9eb14a75bdc0763fbeea 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/transformer_encoder.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/transformer_encoder.py @@ -113,7 +113,7 @@ def multi_head_attention(queries, """ Scaled Dot-Product Attention """ - scaled_q = layers.scale(x=q, scale=d_key ** -0.5) + scaled_q = layers.scale(x=q, scale=d_key**-0.5) product = layers.matmul(x=scaled_q, y=k, transpose_y=True) if attn_bias: product += attn_bias diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/__init__.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/__init__.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/__init__.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/__init__.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/configure.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/configure.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/configure.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/configure.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/fp16.py 
b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/fp16.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/fp16.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/fp16.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/input_field.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/input_field.py similarity index 94% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/input_field.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/input_field.py index 28a68854d4de68d0d765f1f12454d8817d9eedbf..c36f74566048a6bfe942922f0baa28f2dd421f3a 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/input_field.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/input_field.py @@ -25,8 +25,8 @@ import numpy as np import paddle.fluid as fluid -class InputField(object): - def __init__(self, input_field): +class InputField(object): + def __init__(self, input_field): """init input field""" self.src_ids = input_field[0] self.pos_ids = input_field[1] diff --git a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/model_check.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/model_check.py similarity index 99% rename from PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/model_check.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/model_check.py index 013130cbb0a9f4d44edc28589bf83672b69abf26..dacf1a668238a22d94fbefb9f52b05620581cad2 100644 --- a/PaddleNLP/PaddleDialogue/auto_dialogue_evaluation/ade/utils/model_check.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/model_check.py @@ -30,7 +30,7 @@ def check_cuda(use_cuda, err = \ if __name__ == "__main__": - + check_cuda(True) check_cuda(False) diff --git 
a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/py23.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/py23.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/py23.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/py23.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/save_load_io.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/save_load_io.py similarity index 98% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/save_load_io.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/save_load_io.py index 7fb9adc6858c86c82d0b9e22dff7a714b642531c..bdcdd811dd1ebb27542e5facc3ba050f00df08f8 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu/utils/save_load_io.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu/utils/save_load_io.py @@ -69,8 +69,8 @@ def init_from_checkpoint(args, exe, program): def init_from_params(args, exe, program): assert isinstance(args.init_from_params, str) - - if not os.path.exists(args.init_from_params): + + if not os.path.exists(args.init_from_params): raise Warning("the params path does not exist.") return False @@ -113,7 +113,7 @@ def save_param(args, exe, program, dirname): if not os.path.exists(param_dir): os.makedirs(param_dir) - + fluid.io.save_params( exe, os.path.join(param_dir, dirname), @@ -122,5 +122,3 @@ def save_param(args, exe, program, dirname): print("save parameters at %s" % (os.path.join(param_dir, dirname))) return True - - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu_net.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu_net.py similarity index 79% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu_net.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/dgu_net.py index 
0634e0e3fed653b20f28774c582c3a47b8103dc9..2b6215df42a3b47517050e4c98d571dc3f8d5f1d 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/dgu_net.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/dgu_net.py @@ -23,14 +23,9 @@ from dgu.bert import BertModel from dgu.utils.configure import JsonConfig -def create_net( - is_training, - model_input, - num_labels, - paradigm_inst, - args): +def create_net(is_training, model_input, num_labels, paradigm_inst, args): """create dialogue task model""" - + src_ids = model_input.src_ids pos_ids = model_input.pos_ids sent_ids = model_input.sent_ids @@ -48,14 +43,15 @@ def create_net( config=bert_conf, use_fp16=False) - params = {'num_labels': num_labels, - 'src_ids': src_ids, - 'pos_ids': pos_ids, - 'sent_ids': sent_ids, - 'input_mask': input_mask, - 'labels': labels, - 'is_training': is_training} + params = { + 'num_labels': num_labels, + 'src_ids': src_ids, + 'pos_ids': pos_ids, + 'sent_ids': sent_ids, + 'input_mask': input_mask, + 'labels': labels, + 'is_training': is_training + } results = paradigm_inst.paradigm(bert, params) return results - diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/eval.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/eval.py similarity index 94% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/eval.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/eval.py index 94d7be51f7dc76638bdc155d8cd516f5decb59f3..73c044750f655d0cdc1e23ccdb1ece44d65c5f3d 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/eval.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/eval.py @@ -20,17 +20,17 @@ from dgu.evaluation import evaluate from dgu.utils.configure import PDConfig -def do_eval(args): +def do_eval(args): task_name = args.task_name.lower() reference = args.evaluation_file predictions = args.output_prediction_file - + evaluate(task_name, predictions, reference) -if __name__ == 
"__main__": - +if __name__ == "__main__": + args = PDConfig(yaml_file="./data/config/dgu.yaml") args.build() diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/images/dgu.png b/PaddleNLP/dialogue_system/dialogue_general_understanding/images/dgu.png similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/images/dgu.png rename to PaddleNLP/dialogue_system/dialogue_general_understanding/images/dgu.png diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/inference_model.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/inference_model.py similarity index 67% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/inference_model.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/inference_model.py index 01e6d96132998b36a3a3db55289cbe9e202e831c..438ebcc730f57ea27a5821b3aa873c4a0a413dba 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/inference_model.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/inference_model.py @@ -29,10 +29,10 @@ import dgu.utils.save_load_io as save_load_io import dgu.reader as reader from dgu_net import create_net -import dgu.define_paradigm as define_paradigm +import dgu.define_paradigm as define_paradigm -def do_save_inference_model(args): +def do_save_inference_model(args): """save inference model function""" task_name = args.task_name.lower() @@ -57,35 +57,36 @@ def do_save_inference_model(args): with fluid.unique_name.guard(): # define inputs of the network - num_labels = len(processors[task_name].get_labels()) + num_labels = len(processors[task_name].get_labels()) src_ids = fluid.data( - name='src_ids', shape=[-1, args.max_seq_len], dtype='int64') + name='src_ids', shape=[-1, args.max_seq_len], dtype='int64') pos_ids = fluid.data( - name='pos_ids', shape=[-1, args.max_seq_len], dtype='int64') + name='pos_ids', shape=[-1, args.max_seq_len], dtype='int64') sent_ids = fluid.data( - 
name='sent_ids', shape=[-1, args.max_seq_len], dtype='int64') + name='sent_ids', shape=[-1, args.max_seq_len], dtype='int64') input_mask = fluid.data( - name='input_mask', shape=[-1, args.max_seq_len], dtype='float32') - if args.task_name == 'atis_slot': + name='input_mask', + shape=[-1, args.max_seq_len], + dtype='float32') + if args.task_name == 'atis_slot': labels = fluid.data( - name='labels', shape=[-1, args.max_seq_len], dtype='int64') + name='labels', shape=[-1, args.max_seq_len], dtype='int64') elif args.task_name in ['dstc2', 'dstc2_asr', 'multi-woz']: labels = fluid.data( - name='labels', shape=[-1, num_labels], dtype='int64') - else: - labels = fluid.data( - name='labels', shape=[-1, 1], dtype='int64') - + name='labels', shape=[-1, num_labels], dtype='int64') + else: + labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64') + input_inst = [src_ids, pos_ids, sent_ids, input_mask, labels] input_field = InputField(input_inst) - + results = create_net( - is_training=False, - model_input=input_field, - num_labels=num_labels, - paradigm_inst=paradigm_inst, - args=args) + is_training=False, + model_input=input_field, + num_labels=num_labels, + paradigm_inst=paradigm_inst, + args=args) probs = results.get("probs", None) if args.use_cuda: @@ -97,7 +98,7 @@ def do_save_inference_model(args): exe.run(startup_prog) assert (args.init_from_params) or (args.init_from_pretrain_model) - + if args.init_from_params: save_load_io.init_from_params(args, exe, test_prog) elif args.init_from_pretrain_model: @@ -105,20 +106,16 @@ def do_save_inference_model(args): # saving inference model fluid.io.save_inference_model( - args.inference_model_dir, - feeded_var_names=[ - input_field.src_ids.name, - input_field.pos_ids.name, - input_field.sent_ids.name, - input_field.input_mask.name - ], - target_vars=[ - probs - ], - executor=exe, - main_program=test_prog, - model_filename="model.pdmodel", - params_filename="params.pdparams") + args.inference_model_dir, + 
feeded_var_names=[ + input_field.src_ids.name, input_field.pos_ids.name, + input_field.sent_ids.name, input_field.input_mask.name + ], + target_vars=[probs], + executor=exe, + main_program=test_prog, + model_filename="model.pdmodel", + params_filename="params.pdparams") print("save inference model at %s" % (args.inference_model_dir)) diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/main.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/main.py similarity index 99% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/main.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/main.py index 0b7b43f3c7daaf350319ee2bca0ba1de1d2e17de..bf1cf3b26646eb03b74f857ec2323a4d4ea33f0b 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/main.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/main.py @@ -26,7 +26,6 @@ from inference_model import do_save_inference_model from dgu.utils.configure import PDConfig - if __name__ == "__main__": args = PDConfig(yaml_file="./data/config/dgu.yaml") diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/predict.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/predict.py similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/predict.py rename to PaddleNLP/dialogue_system/dialogue_general_understanding/predict.py diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/run.sh b/PaddleNLP/dialogue_system/dialogue_general_understanding/run.sh similarity index 100% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/run.sh rename to PaddleNLP/dialogue_system/dialogue_general_understanding/run.sh diff --git a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/train.py b/PaddleNLP/dialogue_system/dialogue_general_understanding/train.py similarity index 74% rename from PaddleNLP/PaddleDialogue/dialogue_general_understanding/train.py rename to 
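The reflowed hunks above keep DGU's task-dependent `labels` shapes intact: `atis_slot` is sequence labeling (one tag per token), the `dstc2` family uses multi-label targets, and every other task is single-label classification. A minimal pure-Python sketch of that dispatch (the helper name is ours, not part of the patch; note `inference_model.py` lists `dstc2_asr` and `multi-woz` alongside `dstc2`, while the `train.py` hunk below checks only `dstc2`):

```python
def label_shape(task_name, max_seq_len, num_labels):
    """Return the shape passed to fluid.data for `labels`;
    -1 is the variable batch dimension."""
    if task_name == 'atis_slot':
        # sequence labeling: one label per token position
        return [-1, max_seq_len]
    elif task_name in ('dstc2', 'dstc2_asr', 'multi-woz'):
        # dialogue state tracking: multi-label target vector
        return [-1, num_labels]
    else:
        # plain classification: a single label id
        return [-1, 1]

print(label_shape('atis_slot', 128, 42))  # [-1, 128]
print(label_shape('dstc2', 128, 42))      # [-1, 42]
print(label_shape('atis_intent', 128, 42))  # [-1, 1]
```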
PaddleNLP/dialogue_system/dialogue_general_understanding/train.py index 7401cee810cdf25408780d7cc6bfa059a0df5e4a..2ea2a0395a26400ac29d1adadf47f7cea2d9ec32 100644 --- a/PaddleNLP/PaddleDialogue/dialogue_general_understanding/train.py +++ b/PaddleNLP/dialogue_system/dialogue_general_understanding/train.py @@ -28,7 +28,7 @@ import paddle.fluid as fluid from dgu_net import create_net import dgu.reader as reader from dgu.optimization import optimization -import dgu.define_paradigm as define_paradigm +import dgu.define_paradigm as define_paradigm from dgu.utils.configure import PDConfig from dgu.utils.input_field import InputField from dgu.utils.model_check import check_cuda @@ -37,7 +37,7 @@ import dgu.utils.save_load_io as save_load_io def do_train(args): """train function""" - + task_name = args.task_name.lower() paradigm_inst = define_paradigm.Paradigm(task_name) @@ -53,34 +53,35 @@ def do_train(args): train_prog = fluid.default_main_program() startup_prog = fluid.default_startup_program() - with fluid.program_guard(train_prog, startup_prog): + with fluid.program_guard(train_prog, startup_prog): train_prog.random_seed = args.random_seed startup_prog.random_seed = args.random_seed - with fluid.unique_name.guard(): + with fluid.unique_name.guard(): num_labels = len(processors[task_name].get_labels()) src_ids = fluid.data( - name='src_ids', shape=[-1, args.max_seq_len], dtype='int64') + name='src_ids', shape=[-1, args.max_seq_len], dtype='int64') pos_ids = fluid.data( - name='pos_ids', shape=[-1, args.max_seq_len], dtype='int64') + name='pos_ids', shape=[-1, args.max_seq_len], dtype='int64') sent_ids = fluid.data( - name='sent_ids', shape=[-1, args.max_seq_len], dtype='int64') + name='sent_ids', shape=[-1, args.max_seq_len], dtype='int64') input_mask = fluid.data( - name='input_mask', shape=[-1, args.max_seq_len], dtype='float32') - if args.task_name == 'atis_slot': + name='input_mask', + shape=[-1, args.max_seq_len], + dtype='float32') + if args.task_name == 
'atis_slot': labels = fluid.data( - name='labels', shape=[-1, args.max_seq_len], dtype='int64') + name='labels', shape=[-1, args.max_seq_len], dtype='int64') elif args.task_name in ['dstc2']: labels = fluid.data( - name='labels', shape=[-1, num_labels], dtype='int64') - else: - labels = fluid.data( - name='labels', shape=[-1, 1], dtype='int64') - + name='labels', shape=[-1, num_labels], dtype='int64') + else: + labels = fluid.data(name='labels', shape=[-1, 1], dtype='int64') + input_inst = [src_ids, pos_ids, sent_ids, input_mask, labels] input_field = InputField(input_inst) - data_reader = fluid.io.PyReader(feed_list=input_inst, - capacity=4, iterable=False) + data_reader = fluid.io.PyReader( + feed_list=input_inst, capacity=4, iterable=False) processor = processors[task_name](data_dir=args.data_dir, vocab_path=args.vocab_path, max_seq_len=args.max_seq_len, @@ -90,12 +91,12 @@ def do_train(args): random_seed=args.random_seed) results = create_net( - is_training=True, - model_input=input_field, - num_labels=num_labels, - paradigm_inst=paradigm_inst, - args=args) - + is_training=True, + model_input=input_field, + num_labels=num_labels, + paradigm_inst=paradigm_inst, + args=args) + loss = results.get("loss", None) probs = results.get("probs", None) accuracy = results.get("accuracy", None) @@ -103,21 +104,19 @@ def do_train(args): loss.persistable = True probs.persistable = True - if accuracy: + if accuracy: accuracy.persistable = True num_seqs.persistable = True - if args.use_cuda: + if args.use_cuda: dev_count = fluid.core.get_cuda_device_count() - else: + else: dev_count = int(os.environ.get('CPU_NUM', 1)) - + batch_generator = processor.data_generator( - batch_size=args.batch_size, - phase='train', - shuffle=True) + batch_size=args.batch_size, phase='train', shuffle=True) num_train_examples = processor.get_num_examples(phase='train') - + if args.in_tokens: max_train_steps = args.epoch * num_train_examples // ( args.batch_size // args.max_seq_len) // dev_count @@ 
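The `in_tokens` branch visible at the end of the hunk above budgets batches by token count rather than example count, so the effective examples-per-batch is `batch_size // max_seq_len`. A pure-Python sketch of just that branch (the non-`in_tokens` branch is outside this hunk, so it is not reproduced here):

```python
def max_train_steps_in_tokens(epoch, num_examples, batch_size, max_seq_len,
                              dev_count):
    # In token-budget mode `batch_size` is a token count; dividing by
    # max_seq_len gives examples per batch, and the work is further
    # split across dev_count devices.
    return epoch * num_examples // (batch_size // max_seq_len) // dev_count

# 2 epochs over 10k examples, 4096-token batches of 128-token sequences:
print(max_train_steps_in_tokens(2, 10000, 4096, 128, 1))  # 625
```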
-147,32 +146,32 @@ def do_train(args): place = fluid.CUDAPlace(int(os.getenv('FLAGS_selected_gpus', '0'))) else: place = fluid.CPUPlace() - + exe = fluid.Executor(place) exe.run(startup_prog) assert (args.init_from_checkpoint == "") or ( - args.init_from_pretrain_model == "") + args.init_from_pretrain_model == "") # init from some checkpoint, to resume the previous training - if args.init_from_checkpoint: + if args.init_from_checkpoint: save_load_io.init_from_checkpoint(args, exe, train_prog) - + # init from some pretrain models, to better solve the current task - if args.init_from_pretrain_model: + if args.init_from_pretrain_model: save_load_io.init_from_pretrain_model(args, exe, train_prog) build_strategy = fluid.compiler.BuildStrategy() build_strategy.enable_inplace = True compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel( - loss_name=loss.name, build_strategy=build_strategy) - + loss_name=loss.name, build_strategy=build_strategy) + # start training steps = 0 time_begin = time.time() ce_info = [] - for epoch_step in range(args.epoch): + for epoch_step in range(args.epoch): data_reader.start() while True: try: @@ -216,43 +215,38 @@ def do_train(args): used_time = time_end - time_begin current_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())) - if accuracy is not None: - print( - "%s epoch: %d, step: %d, ave loss: %f, " - "ave acc: %f, speed: %f steps/s" % - (current_time, epoch_step, steps, - np.mean(np_loss), - np.mean(np_acc), - args.print_steps / used_time)) + if accuracy is not None: + print("%s epoch: %d, step: %d, ave loss: %f, " + "ave acc: %f, speed: %f steps/s" % + (current_time, epoch_step, steps, + np.mean(np_loss), np.mean(np_acc), + args.print_steps / used_time)) ce_info.append([ - np.mean(np_loss), - np.mean(np_acc), + np.mean(np_loss), np.mean(np_acc), args.print_steps / used_time ]) else: - print( - "%s epoch: %d, step: %d, ave loss: %f, " - "speed: %f steps/s" % - (current_time, epoch_step, steps, - 
np.mean(np_loss), - args.print_steps / used_time)) - ce_info.append([ - np.mean(np_loss), - args.print_steps / used_time - ]) + print("%s epoch: %d, step: %d, ave loss: %f, " + "speed: %f steps/s" % + (current_time, epoch_step, steps, + np.mean(np_loss), args.print_steps / used_time)) + ce_info.append( + [np.mean(np_loss), args.print_steps / used_time]) time_begin = time.time() - if steps % args.save_steps == 0: + if steps % args.save_steps == 0: save_path = "step_" + str(steps) - if args.save_checkpoint: - save_load_io.save_checkpoint(args, exe, train_prog, save_path) + if args.save_checkpoint: + save_load_io.save_checkpoint(args, exe, train_prog, + save_path) if args.save_param: - save_load_io.save_param(args, exe, train_prog, save_path) - - except fluid.core.EOFException: + save_load_io.save_param(args, exe, train_prog, + save_path) + + except fluid.core.EOFException: data_reader.reset() break - if args.save_checkpoint: + if args.save_checkpoint: save_load_io.save_checkpoint(args, exe, train_prog, "step_final") if args.save_param: save_load_io.save_param(args, exe, train_prog, "step_final") @@ -264,7 +258,7 @@ def do_train(args): if cards != '': num = len(cards.split(",")) return num - + if args.enable_ce: card_num = get_cards() print("test_card_num", card_num) @@ -283,8 +277,8 @@ def do_train(args): print("kpis\ttrain_acc_%s_card%s\t%f" % (task_name, card_num, ce_acc)) -if __name__ == '__main__': - +if __name__ == '__main__': + args = PDConfig(yaml_file="./data/config/dgu.yaml") args.build() args.Print() diff --git a/PaddleNLP/emotion_detection/inference_model.py b/PaddleNLP/emotion_detection/inference_model.py index 64f4815c35b6e9a2e4fb78be6cf0587a2464f513..0cc4b503c641e9cae7823fc2456c7534f3a21852 100644 --- a/PaddleNLP/emotion_detection/inference_model.py +++ b/PaddleNLP/emotion_detection/inference_model.py @@ -19,8 +19,7 @@ from __future__ import print_function import os import sys -sys.path.append("../") - +sys.path.append("../shared_modules/") import 
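The `get_cards` helper whose body appears in the hunk above counts devices by splitting a comma-separated card list (the variable it reads is set outside the lines shown, presumably from `CUDA_VISIBLE_DEVICES`). A self-contained sketch with the list passed in as a parameter, since the env lookup is not part of this hunk:

```python
def get_cards(cards):
    """Count devices in a comma-separated card list such as '0,1,2,3'.
    Returns 0 for an empty string, matching the original's initial value."""
    num = 0
    if cards != '':
        num = len(cards.split(","))
    return num

print(get_cards("0,1,2,3"))  # 4
print(get_cards(""))         # 0
```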
paddle import paddle.fluid as fluid import numpy as np diff --git a/PaddleNLP/emotion_detection/run_classifier.py b/PaddleNLP/emotion_detection/run_classifier.py index 8c93d210f3be1decc5ed20d1612620a56b8641c5..a5dd0ebb8dd711d38d81e55ce31221f2e967fb29 100644 --- a/PaddleNLP/emotion_detection/run_classifier.py +++ b/PaddleNLP/emotion_detection/run_classifier.py @@ -23,7 +23,7 @@ import os import time import multiprocessing import sys -sys.path.append("../") +sys.path.append("../shared_modules/") import paddle import paddle.fluid as fluid diff --git a/PaddleNLP/emotion_detection/run_ernie_classifier.py b/PaddleNLP/emotion_detection/run_ernie_classifier.py index fe2aa4b792db6f8aaa92c7e814b4f4e510028c3d..0f929292452c56404199e77962dec0d77f64f12d 100644 --- a/PaddleNLP/emotion_detection/run_ernie_classifier.py +++ b/PaddleNLP/emotion_detection/run_ernie_classifier.py @@ -24,7 +24,7 @@ import time import argparse import multiprocessing import sys -sys.path.append("../") +sys.path.append("../shared_modules/") import paddle import paddle.fluid as fluid diff --git a/PaddleNLP/language_model/train.py b/PaddleNLP/language_model/train.py index 5a6773c217930568f22c7565b13ffb43fbb0a25d..9a5af4b7be9869d0e2a09d9959f82947279900b1 100644 --- a/PaddleNLP/language_model/train.py +++ b/PaddleNLP/language_model/train.py @@ -36,7 +36,7 @@ import sys if sys.version[0] == '2': reload(sys) sys.setdefaultencoding("utf-8") -sys.path.append('../') +sys.path.append('../shared_modules/') import os os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" diff --git a/PaddleNLP/lexical_analysis/creator.py b/PaddleNLP/lexical_analysis/creator.py index 9f84c192d536f4ddf0a35affcee290b16dc6e23b..bc8e492f1b66e4474b626aa3dbfa767889f73e1d 100644 --- a/PaddleNLP/lexical_analysis/creator.py +++ b/PaddleNLP/lexical_analysis/creator.py @@ -26,7 +26,7 @@ from paddle.fluid.initializer import NormalInitializer from reader import Dataset from ernie_reader import SequenceLabelReader -sys.path.append("..") 
+sys.path.append("../shared_modules/") from models.sequence_labeling import nets from models.representation.ernie import ernie_encoder, ernie_pyreader @@ -35,7 +35,8 @@ def create_model(args, vocab_size, num_labels, mode='train'): """create lac model""" # model's input data - words = fluid.data(name='words', shape=[None, 1], dtype='int64', lod_level=1) + words = fluid.data( + name='words', shape=[None, 1], dtype='int64', lod_level=1) targets = fluid.data( name='targets', shape=[None, 1], dtype='int64', lod_level=1) @@ -88,7 +89,8 @@ def create_pyreader(args, return_reader=False, mode='train'): # init reader - device_count = len(fluid.cuda_places()) if args.use_cuda else len(fluid.cpu_places()) + device_count = len(fluid.cuda_places()) if args.use_cuda else len( + fluid.cpu_places()) if model == 'lac': pyreader = fluid.io.DataLoader.from_generator( @@ -107,14 +109,14 @@ def create_pyreader(args, fluid.io.shuffle( reader.file_reader(file_name), buf_size=args.traindata_shuffle_buffer), - batch_size=args.batch_size/device_count), + batch_size=args.batch_size / device_count), places=place) else: pyreader.set_sample_list_generator( fluid.io.batch( reader.file_reader( file_name, mode=mode), - batch_size=args.batch_size/device_count), + batch_size=args.batch_size / device_count), places=place) elif model == 'ernie': diff --git a/PaddleNLP/lexical_analysis/ernie_reader.py b/PaddleNLP/lexical_analysis/ernie_reader.py index 52a2dd533c7467043b718a630f951ca33a6e9402..53e74f72285130ad2fbd82fd3ee286c9c799a149 100644 --- a/PaddleNLP/lexical_analysis/ernie_reader.py +++ b/PaddleNLP/lexical_analysis/ernie_reader.py @@ -20,7 +20,7 @@ import sys from collections import namedtuple import numpy as np -sys.path.append("..") +sys.path.append("../shared_modules/") from preprocess.ernie.task_reader import BaseReader, tokenization diff --git a/PaddleNLP/lexical_analysis/eval.py b/PaddleNLP/lexical_analysis/eval.py index 
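The `creator.py` hunk above only normalizes spacing around `args.batch_size / device_count`, but the expression itself is the per-device batch split: the configured global batch size is divided evenly across the visible CUDA or CPU places. A hedged sketch of the arithmetic (using floor division to show the intended whole-example semantics; the original uses `/`, which is true division under Python 3):

```python
def per_device_batch_size(global_batch_size, device_count):
    # Split one configured batch evenly across devices; with floor
    # division a non-divisible size silently drops the remainder.
    return global_batch_size // device_count

print(per_device_batch_size(32, 4))  # 8
print(per_device_batch_size(30, 4))  # 7 (remainder of 2 dropped)
```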
3b96d0c7e1258313709d69bd1e1ba02b25ccb33f..c3e28b1314ba3f55555e2d97ea2b5f488dd6d422 100644 --- a/PaddleNLP/lexical_analysis/eval.py +++ b/PaddleNLP/lexical_analysis/eval.py @@ -24,7 +24,7 @@ import paddle import utils import reader import creator -sys.path.append('../models/') +sys.path.append('../shared_modules/models/') from model_check import check_cuda from model_check import check_version diff --git a/PaddleNLP/lexical_analysis/inference_model.py b/PaddleNLP/lexical_analysis/inference_model.py index 89075723fe9477d0c153c82b1d87d7377c3f7258..29d078adcbaae80f43f761de83f83df481c5b019 100644 --- a/PaddleNLP/lexical_analysis/inference_model.py +++ b/PaddleNLP/lexical_analysis/inference_model.py @@ -10,7 +10,7 @@ import paddle.fluid as fluid import creator import reader import utils -sys.path.append('../models/') +sys.path.append('../shared_modules/models/') from model_check import check_cuda from model_check import check_version diff --git a/PaddleNLP/lexical_analysis/predict.py b/PaddleNLP/lexical_analysis/predict.py index d3ed22ac7cb9f4f45d1cb1350f13031c62b54d4c..3b2d2597d0cec875a854362277a47ba13651f874 100644 --- a/PaddleNLP/lexical_analysis/predict.py +++ b/PaddleNLP/lexical_analysis/predict.py @@ -24,7 +24,7 @@ import paddle import utils import reader import creator -sys.path.append('../models/') +sys.path.append('../shared_modules/models/') from model_check import check_cuda from model_check import check_version diff --git a/PaddleNLP/lexical_analysis/run_ernie_sequence_labeling.py b/PaddleNLP/lexical_analysis/run_ernie_sequence_labeling.py index 1b809a28526fe2940dadef355551c63394aeac65..d716a8132784cf42cb50ce53f980a4c57c0ec785 100644 --- a/PaddleNLP/lexical_analysis/run_ernie_sequence_labeling.py +++ b/PaddleNLP/lexical_analysis/run_ernie_sequence_labeling.py @@ -34,7 +34,7 @@ import paddle.fluid as fluid import creator import utils -sys.path.append("..") +sys.path.append("../shared_modules/") from models.representation.ernie import ErnieConfig from 
models.model_check import check_cuda from models.model_check import check_version @@ -187,8 +187,8 @@ def do_train(args): end_time - start_time, train_pyreader.queue.size())) if steps % args.save_steps == 0: - save_path = os.path.join(args.model_save_dir, "step_" + str(steps), - "checkpoint") + save_path = os.path.join(args.model_save_dir, + "step_" + str(steps), "checkpoint") print("\tsaving model as %s" % (save_path)) fluid.save(train_program, save_path) @@ -196,9 +196,10 @@ def do_train(args): evaluate(exe, test_program, test_pyreader, train_ret) save_path = os.path.join(args.model_save_dir, "step_" + str(steps), - "checkpoint") + "checkpoint") fluid.save(train_program, save_path) + def do_eval(args): # init executor if args.use_cuda: diff --git a/PaddleNLP/lexical_analysis/train.py b/PaddleNLP/lexical_analysis/train.py index 5cc28987bb603e56fc33dd429eeab48bdbadf991..aa1db213168b422d2e288471243b01e8440b6432 100644 --- a/PaddleNLP/lexical_analysis/train.py +++ b/PaddleNLP/lexical_analysis/train.py @@ -29,7 +29,7 @@ import reader import utils import creator from eval import test_process -sys.path.append('../models/') +sys.path.append('../shared_modules/models/') from model_check import check_cuda from model_check import check_version @@ -151,8 +151,7 @@ def do_train(args): # save checkpoints if step % args.save_steps == 0 and step != 0: save_path = os.path.join(args.model_save_dir, - "step_" + str(step), - "checkpoint") + "step_" + str(step), "checkpoint") fluid.save(train_program, save_path) step += 1 diff --git a/PaddleNLP/PaddleMRC/README.md b/PaddleNLP/machine_reading_comprehension/README.md similarity index 97% rename from PaddleNLP/PaddleMRC/README.md rename to PaddleNLP/machine_reading_comprehension/README.md index cf565b5095220229b71bae9e8efe7b6ea119e13d..ef4d89c9b13bf3eb115e1bda59d0f330fd59209a 100644 --- a/PaddleNLP/PaddleMRC/README.md +++ b/PaddleNLP/machine_reading_comprehension/README.md @@ -14,12 +14,12 @@ DuReader是一个大规模、面向真实应用、由人类生成的中文阅读 - 答案由人类生成 - 
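The reflowed `save_path` lines in the `run_ernie_sequence_labeling.py` and `train.py` hunks above all build the same checkpoint layout: one `step_<N>` directory per save under `model_save_dir`, each holding a `checkpoint` prefix for `fluid.save`. A small sketch of that path convention (the helper name is ours):

```python
import os

def checkpoint_path(model_save_dir, steps):
    """Build <model_save_dir>/step_<steps>/checkpoint, the save prefix
    used at both the periodic save_steps interval and the final save
    (where the literal 'step_final' replaces the step number)."""
    return os.path.join(model_save_dir, "step_" + str(steps), "checkpoint")

print(checkpoint_path("./models", 500))
```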
面向真实应用场景 - 标注更加丰富细致 - + 更多关于DuReader数据集的详细信息可在[DuReader官网](https://ai.baidu.com//broad/subordinate?dataset=dureader)找到。 ### DuReader基线系统 -DuReader基线系统利用[PaddlePaddle](http://paddlepaddle.org)深度学习框架,针对**DuReader阅读理解数据集**实现并升级了一个经典的阅读理解模型 —— BiDAF. +DuReader基线系统利用[PaddlePaddle](http://paddlepaddle.org)深度学习框架,针对**DuReader阅读理解数据集**实现并升级了一个经典的阅读理解模型 —— BiDAF. ## [KT-Net](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2019-KTNET) @@ -30,7 +30,7 @@ KT-NET是百度NLP提出的具有开创性意义的语言表示与知识表示 - 被ACL 2019录用为长文 ([文章链接](https://www.aclweb.org/anthology/P19-1226/)) 此外,KT-NET具备很强的通用性,不仅适用于机器阅读理解任务,对其他形式的语言理解任务,如自然语言推断、复述识别、语义相似度判断等均有帮助。 - + ## [D-NET](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/MRQA2019-D-NET) D-NET是一个以提升**阅读理解模型泛化能力**为目标的“预训练-微调”框架。D-NET的特点包括: @@ -39,4 +39,3 @@ D-NET是一个以提升**阅读理解模型泛化能力**为目标的“预训 - 在微调阶段引入多任务、多领域的学习策略 (基于[PALM](https://github.com/PaddlePaddle/PALM)多任务学习框架),有效的提升了模型在不同领域的泛化能力 百度利用D-NET框架在EMNLP 2019 [MRQA](https://mrqa.github.io/shared)国际阅读理解评测中以超过第二名近两个百分点的成绩夺得冠军,同时,在全部12个测试数据集中的10个排名第一。 - diff --git a/PaddleNLP/PaddleMT/transformer/.run_ce.sh b/PaddleNLP/machine_translation/transformer/.run_ce.sh similarity index 100% rename from PaddleNLP/PaddleMT/transformer/.run_ce.sh rename to PaddleNLP/machine_translation/transformer/.run_ce.sh diff --git a/PaddleNLP/PaddleMT/transformer/README.md b/PaddleNLP/machine_translation/transformer/README.md similarity index 100% rename from PaddleNLP/PaddleMT/transformer/README.md rename to PaddleNLP/machine_translation/transformer/README.md diff --git a/PaddleNLP/PaddleLARK/BERT/__init__.py b/PaddleNLP/machine_translation/transformer/__init__.py similarity index 100% rename from PaddleNLP/PaddleLARK/BERT/__init__.py rename to PaddleNLP/machine_translation/transformer/__init__.py diff --git a/PaddleNLP/PaddleMT/transformer/_ce.py b/PaddleNLP/machine_translation/transformer/_ce.py similarity index 100% rename from PaddleNLP/PaddleMT/transformer/_ce.py rename to 
PaddleNLP/machine_translation/transformer/_ce.py diff --git a/PaddleNLP/PaddleMT/transformer/desc.py b/PaddleNLP/machine_translation/transformer/desc.py similarity index 92% rename from PaddleNLP/PaddleMT/transformer/desc.py rename to PaddleNLP/machine_translation/transformer/desc.py index f6fa768adc42ebcebe36eb60b52f7f6e366b3887..3eadd8433a79775b46047a0e74cb7480cbf96657 100644 --- a/PaddleNLP/PaddleMT/transformer/desc.py +++ b/PaddleNLP/machine_translation/transformer/desc.py @@ -12,6 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. + def get_input_descs(args): """ Generate a dict mapping data fields to the corresponding data shapes and @@ -42,11 +43,12 @@ def get_input_descs(args): # encoder. # The actual data shape of src_slf_attn_bias is: # [batch_size, n_head, max_src_len_in_batch, max_src_len_in_batch] - "src_slf_attn_bias": [(batch_size, n_head, seq_len, seq_len), "float32"], + "src_slf_attn_bias": + [(batch_size, n_head, seq_len, seq_len), "float32"], # The actual data shape of trg_word is: # [batch_size, max_trg_len_in_batch, 1] "trg_word": [(batch_size, seq_len), "int64", - 2], # lod_level is only used in fast decoder. + 2], # lod_level is only used in fast decoder. # The actual data shape of trg_pos is: # [batch_size, max_trg_len_in_batch, 1] "trg_pos": [(batch_size, seq_len), "int64"], @@ -54,12 +56,14 @@ def get_input_descs(args): # subsequent words in the decoder. # The actual data shape of trg_slf_attn_bias is: # [batch_size, n_head, max_trg_len_in_batch, max_trg_len_in_batch] - "trg_slf_attn_bias": [(batch_size, n_head, seq_len, seq_len), "float32"], + "trg_slf_attn_bias": + [(batch_size, n_head, seq_len, seq_len), "float32"], # This input is used to remove attention weights on paddings of the source # input in the encoder-decoder attention. 
# The actual data shape of trg_src_attn_bias is: # [batch_size, n_head, max_trg_len_in_batch, max_src_len_in_batch] - "trg_src_attn_bias": [(batch_size, n_head, seq_len, seq_len), "float32"], + "trg_src_attn_bias": + [(batch_size, n_head, seq_len, seq_len), "float32"], # This input is used in independent decoder program for inference. # The actual data shape of enc_output is: # [batch_size, max_src_len_in_batch, d_model] @@ -80,6 +84,7 @@ def get_input_descs(args): return input_descs + # Names of word embedding table which might be reused for weight sharing. word_emb_param_names = ( "src_word_emb_table", diff --git a/PaddleNLP/PaddleMT/transformer/gen_data.sh b/PaddleNLP/machine_translation/transformer/gen_data.sh similarity index 100% rename from PaddleNLP/PaddleMT/transformer/gen_data.sh rename to PaddleNLP/machine_translation/transformer/gen_data.sh diff --git a/PaddleNLP/PaddleMT/transformer/images/multi_head_attention.png b/PaddleNLP/machine_translation/transformer/images/multi_head_attention.png similarity index 100% rename from PaddleNLP/PaddleMT/transformer/images/multi_head_attention.png rename to PaddleNLP/machine_translation/transformer/images/multi_head_attention.png diff --git a/PaddleNLP/PaddleMT/transformer/images/transformer_network.png b/PaddleNLP/machine_translation/transformer/images/transformer_network.png similarity index 100% rename from PaddleNLP/PaddleMT/transformer/images/transformer_network.png rename to PaddleNLP/machine_translation/transformer/images/transformer_network.png diff --git a/PaddleNLP/PaddleMT/transformer/inference_model.py b/PaddleNLP/machine_translation/transformer/inference_model.py similarity index 87% rename from PaddleNLP/PaddleMT/transformer/inference_model.py rename to PaddleNLP/machine_translation/transformer/inference_model.py index 5ff15108935567bbe288d74ed7f4fd729b089233..baf6a7325a661b10407280c1e222c730f5e0087c 100644 --- a/PaddleNLP/PaddleMT/transformer/inference_model.py +++ 
b/PaddleNLP/machine_translation/transformer/inference_model.py @@ -87,13 +87,14 @@ def do_save_inference_model(args): # saving inference model - fluid.io.save_inference_model(args.inference_model_dir, - feeded_var_names=list(input_field_names), - target_vars=[out_ids, out_scores], - executor=exe, - main_program=test_prog, - model_filename="model.pdmodel", - params_filename="params.pdparams") + fluid.io.save_inference_model( + args.inference_model_dir, + feeded_var_names=list(input_field_names), + target_vars=[out_ids, out_scores], + executor=exe, + main_program=test_prog, + model_filename="model.pdmodel", + params_filename="params.pdparams") print("save inference model at %s" % (args.inference_model_dir)) diff --git a/PaddleNLP/PaddleMT/transformer/main.py b/PaddleNLP/machine_translation/transformer/main.py similarity index 97% rename from PaddleNLP/PaddleMT/transformer/main.py rename to PaddleNLP/machine_translation/transformer/main.py index feaf29baeb386b7843651ff9fc4197861d702c66..0f2412bffea2956a953a0683617d25a7cfc0d402 100644 --- a/PaddleNLP/PaddleMT/transformer/main.py +++ b/PaddleNLP/machine_translation/transformer/main.py @@ -25,7 +25,6 @@ from train import do_train from predict import do_predict from inference_model import do_save_inference_model - if __name__ == "__main__": LOG_FORMAT = "[%(asctime)s %(levelname)s %(filename)s:%(lineno)d] %(message)s" logging.basicConfig( @@ -43,4 +42,4 @@ if __name__ == "__main__": do_predict(args) if args.do_save_inference_model: - do_save_inference_model(args) \ No newline at end of file + do_save_inference_model(args) diff --git a/PaddleNLP/PaddleMT/transformer/predict.py b/PaddleNLP/machine_translation/transformer/predict.py similarity index 100% rename from PaddleNLP/PaddleMT/transformer/predict.py rename to PaddleNLP/machine_translation/transformer/predict.py diff --git a/PaddleNLP/PaddleMT/transformer/reader.py b/PaddleNLP/machine_translation/transformer/reader.py similarity index 100% rename from 
PaddleNLP/PaddleMT/transformer/reader.py rename to PaddleNLP/machine_translation/transformer/reader.py diff --git a/PaddleNLP/PaddleMT/transformer/train.py b/PaddleNLP/machine_translation/transformer/train.py similarity index 97% rename from PaddleNLP/PaddleMT/transformer/train.py rename to PaddleNLP/machine_translation/transformer/train.py index 129435baba706c37071e836bd8b6745dbafb0b1f..7ae344e63105240145335d04ca97cb289ed974cb 100644 --- a/PaddleNLP/PaddleMT/transformer/train.py +++ b/PaddleNLP/machine_translation/transformer/train.py @@ -142,8 +142,8 @@ def do_train(args): ## init from some checkpoint, to resume the previous training if args.init_from_checkpoint: - load(train_prog, os.path.join(args.init_from_checkpoint, "transformer"), - exe) + load(train_prog, + os.path.join(args.init_from_checkpoint, "transformer"), exe) print("finish initing model from checkpoint from %s" % (args.init_from_checkpoint)) @@ -181,7 +181,7 @@ def do_train(args): batch_id = 0 while True: - if args.max_iter and total_batch_num == args.max_iter: # this for benchmark + if args.max_iter and total_batch_num == args.max_iter: # this for benchmark return try: outs = exe.run(compiled_train_prog, @@ -221,10 +221,9 @@ def do_train(args): "transformer") fluid.save(train_prog, model_path) - batch_id += 1 step_idx += 1 - total_batch_num = total_batch_num + 1 # this is for benchmark + total_batch_num = total_batch_num + 1 # this is for benchmark # profiler tools for benchmark if args.is_profiler and pass_id == 0 and batch_id == args.print_step: diff --git a/PaddleNLP/PaddleMT/transformer/transformer.py b/PaddleNLP/machine_translation/transformer/transformer.py similarity index 84% rename from PaddleNLP/PaddleMT/transformer/transformer.py rename to PaddleNLP/machine_translation/transformer/transformer.py index d260e82fd7648b468c0c254bda98baa9353d48d6..a73ce9f3006a30c0183cb4a71c96ce36c9677b07 100644 --- a/PaddleNLP/PaddleMT/transformer/transformer.py +++ 
b/PaddleNLP/machine_translation/transformer/transformer.py @@ -25,7 +25,6 @@ from desc import * dropout_seed = None - def wrap_layer_with_block(layer, block_idx): """ Make layer define support indicating block, by which we can add layers @@ -300,15 +299,16 @@ def prepare_encoder_decoder(src_word, src_word, size=[src_vocab_size, src_emb_dim], padding_idx=bos_idx, # set embedding of bos to 0 - param_attr=fluid.ParamAttr(name=word_emb_param_name, - initializer=fluid.initializer.Normal( - 0., src_emb_dim**-0.5))) + param_attr=fluid.ParamAttr( + name=word_emb_param_name, + initializer=fluid.initializer.Normal(0., src_emb_dim**-0.5))) src_word_emb = layers.scale(x=src_word_emb, scale=src_emb_dim**0.5) - src_pos_enc = fluid.embedding(src_pos, - size=[src_max_len, src_emb_dim], - param_attr=fluid.ParamAttr( - name=pos_enc_param_name, trainable=False)) + src_pos_enc = fluid.embedding( + src_pos, + size=[src_max_len, src_emb_dim], + param_attr=fluid.ParamAttr( + name=pos_enc_param_name, trainable=False)) src_pos_enc.stop_gradient = True enc_input = src_word_emb + src_pos_enc return layers.dropout( @@ -477,22 +477,22 @@ def decoder(dec_input, The decoder is composed of a stack of identical decoder_layer layers. 
""" for i in range(n_layer): - dec_output = decoder_layer(dec_input, - enc_output, - dec_slf_attn_bias, - dec_enc_attn_bias, - n_head, - d_key, - d_value, - d_model, - d_inner_hid, - prepostprocess_dropout, - attention_dropout, - relu_dropout, - preprocess_cmd, - postprocess_cmd, - cache=None if caches is None else - (caches[i], i)) + dec_output = decoder_layer( + dec_input, + enc_output, + dec_slf_attn_bias, + dec_enc_attn_bias, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + cache=None if caches is None else (caches[i], i)) dec_input = dec_output dec_output = pre_process_layer(dec_output, preprocess_cmd, prepostprocess_dropout) @@ -530,48 +530,51 @@ def transformer(model_input, label = model_input.lbl_word weights = model_input.lbl_weight - enc_output = wrap_encoder(enc_inputs, - src_vocab_size, - max_length, - n_layer, - n_head, - d_key, - d_value, - d_model, - d_inner_hid, - prepostprocess_dropout, - attention_dropout, - relu_dropout, - preprocess_cmd, - postprocess_cmd, - weight_sharing, - bos_idx=bos_idx) - - predict = wrap_decoder(dec_inputs, - trg_vocab_size, - max_length, - n_layer, - n_head, - d_key, - d_value, - d_model, - d_inner_hid, - prepostprocess_dropout, - attention_dropout, - relu_dropout, - preprocess_cmd, - postprocess_cmd, - weight_sharing, - enc_output=enc_output) + enc_output = wrap_encoder( + enc_inputs, + src_vocab_size, + max_length, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + bos_idx=bos_idx) + + predict = wrap_decoder( + dec_inputs, + trg_vocab_size, + max_length, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + enc_output=enc_output) # Padding index do 
not contribute to the total loss. The weights is used to # cancel padding index in calculating the loss. if label_smooth_eps: # TODO: use fluid.input.one_hot after softmax_with_cross_entropy removing # the enforcement that the last dimension of label must be 1. - label = layers.label_smooth(label=layers.one_hot(input=label, - depth=trg_vocab_size), - epsilon=label_smooth_eps) + label = layers.label_smooth( + label=layers.one_hot( + input=label, depth=trg_vocab_size), + epsilon=label_smooth_eps) cost = layers.softmax_with_cross_entropy( logits=predict, @@ -714,22 +717,23 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len, dec_inputs = (model_input.trg_word, model_input.init_score, model_input.init_idx, model_input.trg_src_attn_bias) - enc_output = wrap_encoder(enc_inputs, - src_vocab_size, - max_in_len, - n_layer, - n_head, - d_key, - d_value, - d_model, - d_inner_hid, - prepostprocess_dropout, - attention_dropout, - relu_dropout, - preprocess_cmd, - postprocess_cmd, - weight_sharing, - bos_idx=bos_idx) + enc_output = wrap_encoder( + enc_inputs, + src_vocab_size, + max_in_len, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + bos_idx=bos_idx) start_tokens, init_scores, parent_idx, trg_src_attn_bias = dec_inputs def beam_search(): @@ -763,9 +767,15 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len, dtype=enc_output.dtype, value=0), "static_k": # for encoder-decoder attention - fluid.data(shape=[None, n_head, 0, d_key], dtype=enc_output.dtype, name=("static_k_%d"%i)), + fluid.data( + shape=[None, n_head, 0, d_key], + dtype=enc_output.dtype, + name=("static_k_%d" % i)), "static_v": # for encoder-decoder attention - fluid.data(shape=[None, n_head, 0, d_value], dtype=enc_output.dtype, name=("static_v_%d"%i)), + fluid.data( + shape=[None, n_head, 0, d_value], + dtype=enc_output.dtype, + 
name=("static_v_%d" % i)), } for i in range(n_layer) ] @@ -780,8 +790,8 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len, # gather cell states corresponding to selected parent pre_caches = map_structure( lambda x: layers.gather(x, index=gather_idx), caches) - pre_src_attn_bias = layers.gather(trg_src_attn_bias, - index=gather_idx) + pre_src_attn_bias = layers.gather( + trg_src_attn_bias, index=gather_idx) pre_pos = layers.elementwise_mul( x=layers.fill_constant_batch_size_like( input=pre_src_attn_bias, # cann't use lod tensor here @@ -790,30 +800,30 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len, dtype=pre_ids.dtype), y=step_idx, axis=0) - logits = wrap_decoder((pre_ids, pre_pos, None, pre_src_attn_bias), - trg_vocab_size, - max_in_len, - n_layer, - n_head, - d_key, - d_value, - d_model, - d_inner_hid, - prepostprocess_dropout, - attention_dropout, - relu_dropout, - preprocess_cmd, - postprocess_cmd, - weight_sharing, - enc_output=enc_output, - caches=pre_caches, - bos_idx=bos_idx) + logits = wrap_decoder( + (pre_ids, pre_pos, None, pre_src_attn_bias), + trg_vocab_size, + max_in_len, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + enc_output=enc_output, + caches=pre_caches, + bos_idx=bos_idx) # intra-beam topK topk_scores, topk_indices = layers.topk( input=layers.softmax(logits), k=beam_size) - accu_scores = layers.elementwise_add(x=layers.log(topk_scores), - y=pre_scores, - axis=0) + accu_scores = layers.elementwise_add( + x=layers.log(topk_scores), y=pre_scores, axis=0) # beam_search op uses lod to differentiate branches. 
             accu_scores = layers.lod_reset(accu_scores, pre_ids)
             # topK reduction across beams, also contain special handle of
@@ -832,13 +842,14 @@ def fast_decode(model_input, src_vocab_size, trg_vocab_size, max_in_len,
             return (step_idx, selected_ids, selected_scores, gather_idx,
                     pre_caches, pre_src_attn_bias)
 
-        _ = layers.while_loop(cond=cond_func,
-                              body=body_func,
-                              loop_vars=[
-                                  step_idx, start_tokens, init_scores,
-                                  parent_idx, caches, trg_src_attn_bias
-                              ],
-                              is_test=True)
+        _ = layers.while_loop(
+            cond=cond_func,
+            body=body_func,
+            loop_vars=[
+                step_idx, start_tokens, init_scores, parent_idx, caches,
+                trg_src_attn_bias
+            ],
+            is_test=True)
 
         finished_ids, finished_scores = layers.beam_search_decode(
             ids, scores, beam_size=beam_size, end_id=eos_idx)
diff --git a/PaddleNLP/PaddleMT/transformer/transformer.yaml b/PaddleNLP/machine_translation/transformer/transformer.yaml
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/transformer.yaml
rename to PaddleNLP/machine_translation/transformer/transformer.yaml
diff --git a/PaddleNLP/PaddleLARK/BERT/model/__init__.py b/PaddleNLP/machine_translation/transformer/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/model/__init__.py
rename to PaddleNLP/machine_translation/transformer/utils/__init__.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/check.py b/PaddleNLP/machine_translation/transformer/utils/check.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/check.py
rename to PaddleNLP/machine_translation/transformer/utils/check.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/configure.py b/PaddleNLP/machine_translation/transformer/utils/configure.py
similarity index 96%
rename from PaddleNLP/PaddleMT/transformer/utils/configure.py
rename to PaddleNLP/machine_translation/transformer/utils/configure.py
index 67e601282fee572518435eaed38a4ed8e26fc5f9..874b69ba8f034b379a6854cdc41da46fb60fcc52 100644
--- a/PaddleNLP/PaddleMT/transformer/utils/configure.py
+++ b/PaddleNLP/machine_translation/transformer/utils/configure.py
@@ -199,9 +199,14 @@ class PDConfig(object):
                                "Whether to perform model saving for inference.")
 
         # NOTE: args for profiler
-        self.default_g.add_arg("is_profiler", int, 0, "the switch of profiler tools. (used for benchmark)")
-        self.default_g.add_arg("profiler_path", str, './', "the profiler output file path. (used for benchmark)")
-        self.default_g.add_arg("max_iter", int, 0, "the max train batch num.(used for benchmark)")
+        self.default_g.add_arg(
+            "is_profiler", int, 0,
+            "the switch of profiler tools. (used for benchmark)")
+        self.default_g.add_arg(
+            "profiler_path", str, './',
+            "the profiler output file path. (used for benchmark)")
+        self.default_g.add_arg("max_iter", int, 0,
+                               "the max train batch num.(used for benchmark)")
 
         self.parser = parser
diff --git a/PaddleNLP/PaddleMT/transformer/utils/dist_utils.py b/PaddleNLP/machine_translation/transformer/utils/dist_utils.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/dist_utils.py
rename to PaddleNLP/machine_translation/transformer/utils/dist_utils.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/input_field.py b/PaddleNLP/machine_translation/transformer/utils/input_field.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/input_field.py
rename to PaddleNLP/machine_translation/transformer/utils/input_field.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/load.py b/PaddleNLP/machine_translation/transformer/utils/load.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/load.py
rename to PaddleNLP/machine_translation/transformer/utils/load.py
diff --git a/PaddleNLP/PaddleLARK/BERT/.run_ce.sh b/PaddleNLP/pretrain_langauge_models/BERT/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/.run_ce.sh
rename to PaddleNLP/pretrain_langauge_models/BERT/.run_ce.sh
diff --git a/PaddleNLP/PaddleLARK/BERT/README.md b/PaddleNLP/pretrain_langauge_models/BERT/README.md
similarity index 99%
rename from PaddleNLP/PaddleLARK/BERT/README.md
rename to PaddleNLP/pretrain_langauge_models/BERT/README.md
index 47ceb84eef23fc0d621da5a839e80ebb10ccf4a0..b7770c7fdf5a952b63d7af733cb9278e67e67c64 100644
--- a/PaddleNLP/PaddleLARK/BERT/README.md
+++ b/PaddleNLP/pretrain_langauge_models/BERT/README.md
@@ -415,5 +415,3 @@ for (size_t i = 0; i < output.front().data.length() / sizeof(float); i += 3) {
        << static_cast(output.front().data.data())[i + 2] << std::endl;
 }
 ```
-
-
diff --git a/PaddleNLP/PaddleLARK/BERT/reader/__init__.py b/PaddleNLP/pretrain_langauge_models/BERT/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/reader/__init__.py
rename to PaddleNLP/pretrain_langauge_models/BERT/__init__.py
diff --git a/PaddleNLP/PaddleLARK/BERT/_ce.py b/PaddleNLP/pretrain_langauge_models/BERT/_ce.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/_ce.py
rename to PaddleNLP/pretrain_langauge_models/BERT/_ce.py
diff --git a/PaddleNLP/PaddleLARK/BERT/batching.py b/PaddleNLP/pretrain_langauge_models/BERT/batching.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/batching.py
rename to PaddleNLP/pretrain_langauge_models/BERT/batching.py
diff --git a/PaddleNLP/PaddleLARK/BERT/convert_params.py b/PaddleNLP/pretrain_langauge_models/BERT/convert_params.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/convert_params.py
rename to PaddleNLP/pretrain_langauge_models/BERT/convert_params.py
diff --git a/PaddleNLP/PaddleLARK/BERT/data/demo_config/bert_config.json b/PaddleNLP/pretrain_langauge_models/BERT/data/demo_config/bert_config.json
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/demo_config/bert_config.json
rename to PaddleNLP/pretrain_langauge_models/BERT/data/demo_config/bert_config.json
diff --git a/PaddleNLP/PaddleLARK/BERT/data/demo_config/vocab.txt b/PaddleNLP/pretrain_langauge_models/BERT/data/demo_config/vocab.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/demo_config/vocab.txt
rename to PaddleNLP/pretrain_langauge_models/BERT/data/demo_config/vocab.txt
diff --git a/PaddleNLP/PaddleLARK/BERT/data/demo_wiki_tokens.txt b/PaddleNLP/pretrain_langauge_models/BERT/data/demo_wiki_tokens.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/demo_wiki_tokens.txt
rename to PaddleNLP/pretrain_langauge_models/BERT/data/demo_wiki_tokens.txt
diff --git a/PaddleNLP/PaddleLARK/BERT/data/train/demo_wiki_train.gz b/PaddleNLP/pretrain_langauge_models/BERT/data/train/demo_wiki_train.gz
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/train/demo_wiki_train.gz
rename to PaddleNLP/pretrain_langauge_models/BERT/data/train/demo_wiki_train.gz
diff --git a/PaddleNLP/PaddleLARK/BERT/data/validation/demo_wiki_validation.gz b/PaddleNLP/pretrain_langauge_models/BERT/data/validation/demo_wiki_validation.gz
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/data/validation/demo_wiki_validation.gz
rename to PaddleNLP/pretrain_langauge_models/BERT/data/validation/demo_wiki_validation.gz
diff --git a/PaddleNLP/PaddleLARK/BERT/dist_utils.py b/PaddleNLP/pretrain_langauge_models/BERT/dist_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/dist_utils.py
rename to PaddleNLP/pretrain_langauge_models/BERT/dist_utils.py
diff --git a/PaddleNLP/PaddleLARK/BERT/inference/CMakeLists.txt b/PaddleNLP/pretrain_langauge_models/BERT/inference/CMakeLists.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/inference/CMakeLists.txt
rename to PaddleNLP/pretrain_langauge_models/BERT/inference/CMakeLists.txt
diff --git a/PaddleNLP/PaddleLARK/BERT/inference/README.md b/PaddleNLP/pretrain_langauge_models/BERT/inference/README.md
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/inference/README.md
rename to PaddleNLP/pretrain_langauge_models/BERT/inference/README.md
diff --git a/PaddleNLP/PaddleLARK/BERT/inference/gen_demo_data.py b/PaddleNLP/pretrain_langauge_models/BERT/inference/gen_demo_data.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/inference/gen_demo_data.py
rename to PaddleNLP/pretrain_langauge_models/BERT/inference/gen_demo_data.py
diff --git a/PaddleNLP/PaddleLARK/BERT/inference/inference.cc b/PaddleNLP/pretrain_langauge_models/BERT/inference/inference.cc
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/inference/inference.cc
rename to PaddleNLP/pretrain_langauge_models/BERT/inference/inference.cc
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/__init__.py b/PaddleNLP/pretrain_langauge_models/BERT/model/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/__init__.py
rename to PaddleNLP/pretrain_langauge_models/BERT/model/__init__.py
diff --git a/PaddleNLP/PaddleLARK/BERT/model/bert.py b/PaddleNLP/pretrain_langauge_models/BERT/model/bert.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/model/bert.py
rename to PaddleNLP/pretrain_langauge_models/BERT/model/bert.py
diff --git a/PaddleNLP/PaddleLARK/BERT/model/classifier.py b/PaddleNLP/pretrain_langauge_models/BERT/model/classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/model/classifier.py
rename to PaddleNLP/pretrain_langauge_models/BERT/model/classifier.py
diff --git a/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py b/PaddleNLP/pretrain_langauge_models/BERT/model/transformer_encoder.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py
rename to PaddleNLP/pretrain_langauge_models/BERT/model/transformer_encoder.py
diff --git a/PaddleNLP/PaddleLARK/BERT/optimization.py b/PaddleNLP/pretrain_langauge_models/BERT/optimization.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/optimization.py
rename to PaddleNLP/pretrain_langauge_models/BERT/optimization.py
diff --git a/PaddleNLP/PaddleLARK/BERT/predict_classifier.py b/PaddleNLP/pretrain_langauge_models/BERT/predict_classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/predict_classifier.py
rename to PaddleNLP/pretrain_langauge_models/BERT/predict_classifier.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/__init__.py b/PaddleNLP/pretrain_langauge_models/BERT/reader/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/__init__.py
rename to PaddleNLP/pretrain_langauge_models/BERT/reader/__init__.py
diff --git a/PaddleNLP/PaddleLARK/BERT/reader/cls.py b/PaddleNLP/pretrain_langauge_models/BERT/reader/cls.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/reader/cls.py
rename to PaddleNLP/pretrain_langauge_models/BERT/reader/cls.py
diff --git a/PaddleNLP/PaddleLARK/BERT/reader/pretraining.py b/PaddleNLP/pretrain_langauge_models/BERT/reader/pretraining.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/reader/pretraining.py
rename to PaddleNLP/pretrain_langauge_models/BERT/reader/pretraining.py
diff --git a/PaddleNLP/PaddleLARK/BERT/reader/squad.py b/PaddleNLP/pretrain_langauge_models/BERT/reader/squad.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/reader/squad.py
rename to PaddleNLP/pretrain_langauge_models/BERT/reader/squad.py
diff --git a/PaddleNLP/PaddleLARK/BERT/run_classifier.py b/PaddleNLP/pretrain_langauge_models/BERT/run_classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/run_classifier.py
rename to PaddleNLP/pretrain_langauge_models/BERT/run_classifier.py
diff --git a/PaddleNLP/PaddleLARK/BERT/run_squad.py b/PaddleNLP/pretrain_langauge_models/BERT/run_squad.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/run_squad.py
rename to PaddleNLP/pretrain_langauge_models/BERT/run_squad.py
diff --git a/PaddleNLP/PaddleLARK/BERT/test_local_dist.sh b/PaddleNLP/pretrain_langauge_models/BERT/test_local_dist.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/test_local_dist.sh
rename to PaddleNLP/pretrain_langauge_models/BERT/test_local_dist.sh
diff --git a/PaddleNLP/PaddleLARK/BERT/tokenization.py b/PaddleNLP/pretrain_langauge_models/BERT/tokenization.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/tokenization.py
rename to PaddleNLP/pretrain_langauge_models/BERT/tokenization.py
diff --git a/PaddleNLP/PaddleLARK/BERT/train.py b/PaddleNLP/pretrain_langauge_models/BERT/train.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/train.py
rename to PaddleNLP/pretrain_langauge_models/BERT/train.py
diff --git a/PaddleNLP/PaddleLARK/BERT/train.sh b/PaddleNLP/pretrain_langauge_models/BERT/train.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/train.sh
rename to PaddleNLP/pretrain_langauge_models/BERT/train.sh
diff --git a/PaddleNLP/PaddleLARK/ELMo/utils/__init__.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/utils/__init__.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/__init__.py
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/args.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/args.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/args.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/args.py
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/cards.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/cards.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/cards.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/cards.py
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/fp16.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/fp16.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/fp16.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/fp16.py
diff --git a/PaddleNLP/PaddleLARK/BERT/utils/init.py b/PaddleNLP/pretrain_langauge_models/BERT/utils/init.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/BERT/utils/init.py
rename to PaddleNLP/pretrain_langauge_models/BERT/utils/init.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/.run_ce.sh b/PaddleNLP/pretrain_langauge_models/ELMo/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/.run_ce.sh
rename to PaddleNLP/pretrain_langauge_models/ELMo/.run_ce.sh
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/bilm.py b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/bilm.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/bilm.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/bilm.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/conf/q2b.dic b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/conf/q2b.dic
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/conf/q2b.dic
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/conf/q2b.dic
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/dev/dev.tsv b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/dev/dev.tsv
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/dev/dev.tsv
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/dev/dev.tsv
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/tag.dic b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/tag.dic
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/tag.dic
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/tag.dic
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/train/train.tsv b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/train/train.tsv
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/data/train/train.tsv
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/data/train/train.tsv
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/network.py b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/network.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/network.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/network.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/reader.py b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/reader.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/reader.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/reader.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/run.sh b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/run.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/run.sh
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/run.sh
diff --git a/PaddleNLP/PaddleLARK/ELMo/LAC_demo/train.py b/PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/train.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/LAC_demo/train.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/LAC_demo/train.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/README.md b/PaddleNLP/pretrain_langauge_models/ELMo/README.md
similarity index 99%
rename from PaddleNLP/PaddleLARK/ELMo/README.md
rename to PaddleNLP/pretrain_langauge_models/ELMo/README.md
index edf79e2b509e9eb5c3faf85c281a5ab3680adec7..4f4b87b118fb46ec932676584ec17aad1c5bab45 100755
--- a/PaddleNLP/PaddleLARK/ELMo/README.md
+++ b/PaddleNLP/pretrain_langauge_models/ELMo/README.md
@@ -90,5 +90,3 @@ word_embedding=fluid.layers.concat(input=[elmo_embedding, word_embedding], axis=
 
 ### 参考论文
 [Deep contextualized word representations](https://arxiv.org/abs/1802.05365)
-
-
diff --git a/PaddleNLP/PaddleLARK/XLNet/model/__init__.py b/PaddleNLP/pretrain_langauge_models/ELMo/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/model/__init__.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/__init__.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/_ce.py b/PaddleNLP/pretrain_langauge_models/ELMo/_ce.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/_ce.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/_ce.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/args.py b/PaddleNLP/pretrain_langauge_models/ELMo/args.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/args.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/args.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/data.py b/PaddleNLP/pretrain_langauge_models/ELMo/data.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/data.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/dev/sentence_file.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/dev/sentence_file.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/dev/sentence_file.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/dev/sentence_file.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/dev/sentence_file_2.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/dev/sentence_file_2.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/dev/sentence_file_2.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/dev/sentence_file_2.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/train/sentence_file.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/train/sentence_file.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/train/sentence_file.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/train/sentence_file.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/train/sentence_file_1.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/train/sentence_file_1.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/train/sentence_file_1.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/train/sentence_file_1.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/data/vocabulary_min5k.txt b/PaddleNLP/pretrain_langauge_models/ELMo/data/vocabulary_min5k.txt
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/data/vocabulary_min5k.txt
rename to PaddleNLP/pretrain_langauge_models/ELMo/data/vocabulary_min5k.txt
diff --git a/PaddleNLP/PaddleLARK/ELMo/lm_model.py b/PaddleNLP/pretrain_langauge_models/ELMo/lm_model.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/lm_model.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/lm_model.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/run.sh b/PaddleNLP/pretrain_langauge_models/ELMo/run.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/run.sh
rename to PaddleNLP/pretrain_langauge_models/ELMo/run.sh
diff --git a/PaddleNLP/PaddleLARK/ELMo/train.py b/PaddleNLP/pretrain_langauge_models/ELMo/train.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/train.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/train.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/reader/__init__.py b/PaddleNLP/pretrain_langauge_models/ELMo/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/reader/__init__.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/utils/__init__.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/utils/cards.py b/PaddleNLP/pretrain_langauge_models/ELMo/utils/cards.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/utils/cards.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/utils/cards.py
diff --git a/PaddleNLP/PaddleLARK/ELMo/utils/init.py b/PaddleNLP/pretrain_langauge_models/ELMo/utils/init.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/ELMo/utils/init.py
rename to PaddleNLP/pretrain_langauge_models/ELMo/utils/init.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/.run_ce.sh b/PaddleNLP/pretrain_langauge_models/XLNet/.run_ce.sh
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/.run_ce.sh
rename to PaddleNLP/pretrain_langauge_models/XLNet/.run_ce.sh
diff --git a/PaddleNLP/PaddleLARK/XLNet/README.md b/PaddleNLP/pretrain_langauge_models/XLNet/README.md
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/README.md
rename to PaddleNLP/pretrain_langauge_models/XLNet/README.md
diff --git a/PaddleNLP/PaddleLARK/XLNet/README_cn.md b/PaddleNLP/pretrain_langauge_models/XLNet/README_cn.md
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/README_cn.md
rename to PaddleNLP/pretrain_langauge_models/XLNet/README_cn.md
diff --git a/PaddleNLP/PaddleLARK/XLNet/_ce.py b/PaddleNLP/pretrain_langauge_models/XLNet/_ce.py
similarity index 99%
rename from PaddleNLP/PaddleLARK/XLNet/_ce.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/_ce.py
index 094434273c106f0b0862a8bc0e63f54f4f208289..f35f533c513fa21f335a65e0fea3648a9470e0d8 100644
--- a/PaddleNLP/PaddleLARK/XLNet/_ce.py
+++ b/PaddleNLP/pretrain_langauge_models/XLNet/_ce.py
@@ -7,7 +7,6 @@ from kpi import CostKpi, DurationKpi, AccKpi
 
 #### NOTE kpi.py should shared in models in some way!!!!
 
-
 train_duration_sts_b_card1 = DurationKpi(
     'train_duration_sts_b_card1', 0.01, 0, actived=True)
 train_cost_sts_b_card1 = CostKpi(
diff --git a/PaddleNLP/PaddleLARK/XLNet/classifier_utils.py b/PaddleNLP/pretrain_langauge_models/XLNet/classifier_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/classifier_utils.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/classifier_utils.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/data_utils.py b/PaddleNLP/pretrain_langauge_models/XLNet/data_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/data_utils.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/data_utils.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/utils/__init__.py b/PaddleNLP/pretrain_langauge_models/XLNet/model/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/utils/__init__.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/model/__init__.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/model/classifier.py b/PaddleNLP/pretrain_langauge_models/XLNet/model/classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/model/classifier.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/model/classifier.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/model/xlnet.py b/PaddleNLP/pretrain_langauge_models/XLNet/model/xlnet.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/model/xlnet.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/model/xlnet.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/modeling.py b/PaddleNLP/pretrain_langauge_models/XLNet/modeling.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/modeling.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/modeling.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/optimization.py b/PaddleNLP/pretrain_langauge_models/XLNet/optimization.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/optimization.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/optimization.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/prepro_utils.py b/PaddleNLP/pretrain_langauge_models/XLNet/prepro_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/prepro_utils.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/prepro_utils.py
diff --git a/PaddleNLP/PaddleMT/transformer/__init__.py b/PaddleNLP/pretrain_langauge_models/XLNet/reader/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/__init__.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/reader/__init__.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/reader/cls.py b/PaddleNLP/pretrain_langauge_models/XLNet/reader/cls.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/reader/cls.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/reader/cls.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/reader/squad.py b/PaddleNLP/pretrain_langauge_models/XLNet/reader/squad.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/reader/squad.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/reader/squad.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/run_classifier.py b/PaddleNLP/pretrain_langauge_models/XLNet/run_classifier.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/run_classifier.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/run_classifier.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/run_squad.py b/PaddleNLP/pretrain_langauge_models/XLNet/run_squad.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/run_squad.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/run_squad.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/squad_utils.py b/PaddleNLP/pretrain_langauge_models/XLNet/squad_utils.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/squad_utils.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/squad_utils.py
diff --git a/PaddleNLP/PaddleMT/transformer/utils/__init__.py b/PaddleNLP/pretrain_langauge_models/XLNet/utils/__init__.py
similarity index 100%
rename from PaddleNLP/PaddleMT/transformer/utils/__init__.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/utils/__init__.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/utils/args.py b/PaddleNLP/pretrain_langauge_models/XLNet/utils/args.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/utils/args.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/utils/args.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/utils/cards.py b/PaddleNLP/pretrain_langauge_models/XLNet/utils/cards.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/utils/cards.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/utils/cards.py
diff --git a/PaddleNLP/PaddleLARK/XLNet/utils/init.py b/PaddleNLP/pretrain_langauge_models/XLNet/utils/init.py
similarity index 100%
rename from PaddleNLP/PaddleLARK/XLNet/utils/init.py
rename to PaddleNLP/pretrain_langauge_models/XLNet/utils/init.py
diff --git a/PaddleNLP/sentiment_classification/inference_model_ernie.py b/PaddleNLP/sentiment_classification/inference_model_ernie.py
index 397ca6677630a44c640f93df096ab5e18963c55d..99d421eed0cd05b2cab43acbc162ef4ce161e277 100644
--- a/PaddleNLP/sentiment_classification/inference_model_ernie.py
+++ b/PaddleNLP/sentiment_classification/inference_model_ernie.py
@@ -1,8 +1,8 @@
 # -*- coding: utf_8 -*-
 import os
 import sys
-sys.path.append("../")
-sys.path.append("../models/classification")
+sys.path.append("../shared_modules/")
+sys.path.append("../shared_modules/models/classification")
 import paddle
 import paddle.fluid as fluid
 import numpy as np
@@ -17,6 +17,7 @@ from models.representation.ernie import ErnieConfig
 from models.representation.ernie import ernie_encoder, ernie_encoder_with_paddle_hub
 from preprocess.ernie import task_reader
 
+
 def do_save_inference_model(args):
 
     ernie_config = ErnieConfig(args.ernie_config_path)
@@ -28,30 +29,29 @@ def do_save_inference_model(args):
     else:
         dev_count = int(os.environ.get('CPU_NUM', 1))
         place = fluid.CPUPlace()
-    
+
     exe = fluid.Executor(place)
-    
+
     test_prog = fluid.Program()
     startup_prog = fluid.Program()
     with fluid.program_guard(test_prog, startup_prog):
         with fluid.unique_name.guard():
             infer_pyreader, ernie_inputs, labels = ernie_pyreader(
-                args,
-                pyreader_name="infer_reader")
-
+                args, pyreader_name="infer_reader")
+
             if args.use_paddle_hub:
-                embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, args.max_seq_len)
+                embeddings = ernie_encoder_with_paddle_hub(ernie_inputs,
+                                                           args.max_seq_len)
             else:
-                embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config)
+                embeddings = ernie_encoder(
+                    ernie_inputs, ernie_config=ernie_config)
 
-            probs = create_model(args,
-                                 embeddings,
-                                 labels=labels,
-                                 is_prediction=True)
+            probs = create_model(
+                args, embeddings, labels=labels, is_prediction=True)
     test_prog = test_prog.clone(for_test=True)
     exe.run(startup_prog)
-    
+
     assert (args.init_checkpoint)
 
     if args.init_checkpoint:
@@ -59,11 +59,11 @@ def do_save_inference_model(args):
     fluid.io.save_inference_model(
         args.inference_model_dir,
-        feeded_var_names=[ernie_inputs["src_ids"].name,
-                          ernie_inputs["sent_ids"].name,
-                          ernie_inputs["pos_ids"].name,
-                          ernie_inputs["input_mask"].name,
-                          ernie_inputs["seq_lens"].name],
+        feeded_var_names=[
+            ernie_inputs["src_ids"].name, ernie_inputs["sent_ids"].name,
+            ernie_inputs["pos_ids"].name, ernie_inputs["input_mask"].name,
+            ernie_inputs["seq_lens"].name
+        ],
         target_vars=[probs],
         executor=exe,
         main_program=test_prog,
@@ -72,6 +72,7 @@ def do_save_inference_model(args):
     print("save inference model at %s" %
           (args.inference_model_dir))
 
+
 def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase):
     """
     Inference Function
@@ -80,13 +81,16 @@ def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase):
     """
     test_pyreader.start()
     while True:
         try:
-            np_props = exe.run(program=test_program, fetch_list=fetch_list, return_numpy=True)
+            np_props = exe.run(program=test_program,
+                               fetch_list=fetch_list,
+                               return_numpy=True)
             for probs in np_props[0]:
                 print("%d\t%f\t%f" % (np.argmax(probs), probs[0], probs[1]))
         except fluid.core.EOFException:
             test_pyreader.reset()
             break
 
+
 def test_inference_model(args):
     ernie_config = ErnieConfig(args.ernie_config_path)
     ernie_config.print_config()
@@ -97,9 +101,9 @@ def test_inference_model(args):
     else:
         dev_count = int(os.environ.get('CPU_NUM', 1))
         place = fluid.CPUPlace()
-    
+
     exe = fluid.Executor(place)
-    
+
     reader = task_reader.ClassifyReader(
         vocab_path=args.vocab_path,
         label_map_config=args.label_map_config,
@@ -113,15 +117,11 @@ def test_inference_model(args):
     with fluid.program_guard(test_prog, startup_prog):
         with fluid.unique_name.guard():
             infer_pyreader, ernie_inputs, labels = ernie_pyreader(
-                args,
-                pyreader_name="infer_pyreader")
+                args, pyreader_name="infer_pyreader")
             embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config)
 
             probs = create_model(
-                args,
-                embeddings,
-                labels=labels,
-                is_prediction=True)
+                args, embeddings, labels=labels, is_prediction=True)
     test_prog = test_prog.clone(for_test=True)
     exe.run(startup_prog)
@@ -129,7 +129,7 @@ def test_inference_model(args):
     assert (args.inference_model_dir)
     infer_data_generator = reader.data_generator(
         input_file=args.test_set,
-        batch_size=args.batch_size/dev_count,
+        batch_size=args.batch_size / dev_count,
         phase="infer",
         epoch=1,
         shuffle=False)
@@ -141,8 +141,8 @@ def test_inference_model(args):
         params_filename="params.pdparams")
     infer_pyreader.set_batch_generator(infer_data_generator)
-    inference(exe, test_prog, infer_pyreader,
-              [probs.name],
"infer") + inference(exe, test_prog, infer_pyreader, [probs.name], "infer") + if __name__ == "__main__": args = PDConfig() diff --git a/PaddleNLP/sentiment_classification/run_classifier.py b/PaddleNLP/sentiment_classification/run_classifier.py index eb651716ce930e601c863c7c5e88bb3b5f889187..93340a2926badff44cc0cacc114c73c082a0e9fa 100644 --- a/PaddleNLP/sentiment_classification/run_classifier.py +++ b/PaddleNLP/sentiment_classification/run_classifier.py @@ -12,8 +12,8 @@ import argparse import numpy as np import multiprocessing import sys -sys.path.append("../models/classification/") -sys.path.append("../") +sys.path.append("../shared_modules/models/classification/") +sys.path.append("../shared_modules/") from nets import bow_net from nets import lstm_net @@ -30,24 +30,19 @@ import paddle.fluid as fluid import reader from utils import init_checkpoint -def create_model(args, - pyreader_name, - num_labels, - is_prediction=False): +def create_model(args, pyreader_name, num_labels, is_prediction=False): """ Create Model for sentiment classification """ - + data = fluid.data( - name="src_ids", shape=[None, args.max_seq_len], dtype='int64') - label = fluid.data( - name="label", shape=[None, 1], dtype="int64") - seq_len = fluid.data( - name="seq_len", shape=[None], dtype="int64") - - data_reader = fluid.io.DataLoader.from_generator(feed_list=[data, label, seq_len], - capacity=4, iterable=False) + name="src_ids", shape=[None, args.max_seq_len], dtype='int64') + label = fluid.data(name="label", shape=[None, 1], dtype="int64") + seq_len = fluid.data(name="seq_len", shape=[None], dtype="int64") + + data_reader = fluid.io.DataLoader.from_generator( + feed_list=[data, label, seq_len], capacity=4, iterable=False) if args.model_type == "bilstm_net": network = bilstm_net @@ -63,18 +58,19 @@ def create_model(args, raise ValueError("Unknown network type!") if is_prediction: - probs = network(data, seq_len, None, args.vocab_size, is_prediction=is_prediction) + probs = network( + 
data, seq_len, None, args.vocab_size, is_prediction=is_prediction) print("create inference model...") return data_reader, probs, [data.name, seq_len.name] - ce_loss, probs = network(data, seq_len, label, args.vocab_size, is_prediction=is_prediction) + ce_loss, probs = network( + data, seq_len, label, args.vocab_size, is_prediction=is_prediction) loss = fluid.layers.mean(x=ce_loss) num_seqs = fluid.layers.create_tensor(dtype='int64') accuracy = fluid.layers.accuracy(input=probs, label=label, total=num_seqs) return data_reader, loss, accuracy, num_seqs - def evaluate(exe, test_program, test_pyreader, fetch_list, eval_phase): """ Evaluation Function @@ -99,8 +95,8 @@ def evaluate(exe, test_program, test_pyreader, fetch_list, eval_phase): break time_end = time.time() print("[%s evaluation] ave loss: %f, ave acc: %f, elapsed time: %f s" % - (eval_phase, np.sum(total_cost) / np.sum(total_num_seqs), - np.sum(total_acc) / np.sum(total_num_seqs), time_end - time_begin)) + (eval_phase, np.sum(total_cost) / np.sum(total_num_seqs), + np.sum(total_acc) / np.sum(total_num_seqs), time_end - time_begin)) def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase): @@ -111,8 +107,9 @@ def inference(exe, test_program, test_pyreader, fetch_list, infer_phrase): time_begin = time.time() while True: try: - np_props = exe.run(program=test_program, fetch_list=fetch_list, - return_numpy=True) + np_props = exe.run(program=test_program, + fetch_list=fetch_list, + return_numpy=True) for probs in np_props[0]: print("%d\t%f\t%f" % (np.argmax(probs), probs[0], probs[1])) except fluid.core.EOFException: @@ -135,10 +132,11 @@ def main(args): exe = fluid.Executor(place) task_name = args.task_name.lower() - processor = reader.SentaProcessor(data_dir=args.data_dir, - vocab_path=args.vocab_path, - random_seed=args.random_seed, - max_seq_len=args.max_seq_len) + processor = reader.SentaProcessor( + data_dir=args.data_dir, + vocab_path=args.vocab_path, + random_seed=args.random_seed, + 
max_seq_len=args.max_seq_len) num_labels = len(processor.get_labels()) if not (args.do_train or args.do_val or args.do_infer): @@ -151,7 +149,7 @@ def main(args): if args.do_train: train_data_generator = processor.data_generator( - batch_size=args.batch_size/dev_count, + batch_size=args.batch_size / dev_count, phase='train', epoch=args.epoch, shuffle=True) @@ -183,11 +181,11 @@ def main(args): lower_mem, upper_mem, unit = fluid.contrib.memory_usage( program=train_program, batch_size=args.batch_size) print("Theoretical memory usage in training: %.3f - %.3f %s" % - (lower_mem, upper_mem, unit)) + (lower_mem, upper_mem, unit)) if args.do_val: test_data_generator = processor.data_generator( - batch_size=args.batch_size/dev_count, + batch_size=args.batch_size / dev_count, phase='dev', epoch=1, shuffle=False) @@ -204,7 +202,7 @@ def main(args): if args.do_infer: infer_data_generator = processor.data_generator( - batch_size=args.batch_size/dev_count, + batch_size=args.batch_size / dev_count, phase='infer', epoch=1, shuffle=False) @@ -223,18 +221,13 @@ def main(args): if args.do_train: if args.init_checkpoint: init_checkpoint( - exe, - args.init_checkpoint, - main_program=startup_prog) + exe, args.init_checkpoint, main_program=startup_prog) elif args.do_val or args.do_infer: if not args.init_checkpoint: raise ValueError("args 'init_checkpoint' should be set if" "only doing validation or testing!") - init_checkpoint( - exe, - args.init_checkpoint, - main_program=startup_prog) + init_checkpoint(exe, args.init_checkpoint, main_program=startup_prog) if args.do_train: train_exe = exe @@ -262,7 +255,9 @@ def main(args): else: fetch_list = [] - outputs = train_exe.run(program=train_program, fetch_list=fetch_list, return_numpy=False) + outputs = train_exe.run(program=train_program, + fetch_list=fetch_list, + return_numpy=False) #print("finished one step") if steps % args.skip_steps == 0: np_loss, np_acc, np_num_seqs = outputs @@ -274,23 +269,23 @@ def main(args): 
total_num_seqs.extend(np_num_seqs) if args.verbose: - verbose = "train pyreader queue size: %d, " % train_pyreader.queue.size() + verbose = "train pyreader queue size: %d, " % train_pyreader.queue.size( + ) print(verbose) time_end = time.time() used_time = time_end - time_begin print("step: %d, ave loss: %f, " - "ave acc: %f, speed: %f steps/s" % - (steps, np.sum(total_cost) / np.sum(total_num_seqs), - np.sum(total_acc) / np.sum(total_num_seqs), - args.skip_steps / used_time)) + "ave acc: %f, speed: %f steps/s" % + (steps, np.sum(total_cost) / np.sum(total_num_seqs), + np.sum(total_acc) / np.sum(total_num_seqs), + args.skip_steps / used_time)) total_cost, total_acc, total_num_seqs = [], [], [] time_begin = time.time() if steps % args.save_steps == 0: save_path = os.path.join(args.checkpoints, - "step_" + str(steps), - "checkpoint") + "step_" + str(steps), "checkpoint") fluid.save(train_program, save_path) if steps % args.validation_steps == 0: @@ -298,8 +293,8 @@ def main(args): if args.do_val: print("do evalatation") evaluate(exe, test_prog, test_reader, - [loss.name, accuracy.name, num_seqs.name], - "dev") + [loss.name, accuracy.name, num_seqs.name], + "dev") except fluid.core.EOFException: save_path = os.path.join(args.checkpoints, "step_" + str(steps), @@ -312,13 +307,12 @@ def main(args): if args.do_val: print("Final validation result:") evaluate(exe, test_prog, test_reader, - [loss.name, accuracy.name, num_seqs.name], "dev") + [loss.name, accuracy.name, num_seqs.name], "dev") # final eval on test set if args.do_infer: print("Final test result:") - inference(exe, infer_prog, infer_reader, - [prop.name], "infer") + inference(exe, infer_prog, infer_reader, [prop.name], "infer") if __name__ == "__main__": diff --git a/PaddleNLP/sentiment_classification/run_ernie_classifier.py b/PaddleNLP/sentiment_classification/run_ernie_classifier.py index 13d1447185ea4b758c530d595407f937c95edf4d..21ab742616d4cf62f63583df7ede742be87138ef 100644 --- 
a/PaddleNLP/sentiment_classification/run_ernie_classifier.py +++ b/PaddleNLP/sentiment_classification/run_ernie_classifier.py @@ -16,8 +16,8 @@ import sys import paddle import paddle.fluid as fluid -sys.path.append("../models/classification/") -sys.path.append("..") +sys.path.append("../shared_modules/models/classification/") +sys.path.append("../shared_modules/") print(sys.path) from nets import bow_net @@ -36,6 +36,7 @@ from config import PDConfig from utils import init_checkpoint + def ernie_pyreader(args, pyreader_name): src_ids = fluid.data( name="src_ids", shape=[None, args.max_seq_len, 1], dtype="int64") @@ -45,31 +46,27 @@ def ernie_pyreader(args, pyreader_name): name="pos_ids", shape=[None, args.max_seq_len, 1], dtype="int64") input_mask = fluid.data( name="input_mask", shape=[None, args.max_seq_len, 1], dtype="float32") - labels = fluid.data( - name="labels", shape=[None, 1], dtype="int64") - seq_lens = fluid.data( - name="seq_lens", shape=[None], dtype="int64") - + labels = fluid.data(name="labels", shape=[None, 1], dtype="int64") + seq_lens = fluid.data(name="seq_lens", shape=[None], dtype="int64") + pyreader = fluid.io.DataLoader.from_generator( feed_list=[src_ids, sent_ids, pos_ids, input_mask, labels, seq_lens], capacity=50, iterable=False, use_double_buffer=True) - + ernie_inputs = { "src_ids": src_ids, "sent_ids": sent_ids, "pos_ids": pos_ids, "input_mask": input_mask, - "seq_lens": seq_lens} - + "seq_lens": seq_lens + } + return pyreader, ernie_inputs, labels -def create_model(args, - embeddings, - labels, - is_prediction=False): +def create_model(args, embeddings, labels, is_prediction=False): """ Create Model for sentiment classification based on ERNIE encoder """ @@ -78,11 +75,11 @@ def create_model(args, if args.model_type == "ernie_base": ce_loss, probs = ernie_base_net(sentence_embeddings, labels, - args.num_labels) + args.num_labels) elif args.model_type == "ernie_bilstm": ce_loss, probs = ernie_bilstm_net(token_embeddings, labels, - 
args.num_labels) + args.num_labels) else: raise ValueError("Unknown network type!") @@ -120,8 +117,8 @@ def evaluate(exe, test_program, test_pyreader, fetch_list, eval_phase): break time_end = time.time() print("[%s evaluation] ave loss: %f, ave acc: %f, elapsed time: %f s" % - (eval_phase, np.sum(total_cost) / np.sum(total_num_seqs), - np.sum(total_acc) / np.sum(total_num_seqs), time_end - time_begin)) + (eval_phase, np.sum(total_cost) / np.sum(total_num_seqs), + np.sum(total_acc) / np.sum(total_num_seqs), time_end - time_begin)) def infer(exe, infer_program, infer_pyreader, fetch_list, infer_phase): @@ -132,8 +129,9 @@ def infer(exe, infer_program, infer_pyreader, fetch_list, infer_phase): time_begin = time.time() while True: try: - batch_probs = exe.run(program=infer_program, fetch_list=fetch_list, - return_numpy=True) + batch_probs = exe.run(program=infer_program, + fetch_list=fetch_list, + return_numpy=True) for probs in batch_probs[0]: print("%d\t%f\t%f" % (np.argmax(probs), probs[0], probs[1])) except fluid.core.EOFException: @@ -195,21 +193,19 @@ def main(args): with fluid.unique_name.guard(): # create ernie_pyreader train_pyreader, ernie_inputs, labels = ernie_pyreader( - args, - pyreader_name='train_pyreader') + args, pyreader_name='train_pyreader') # get ernie_embeddings if args.use_paddle_hub: - embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, args.max_seq_len) + embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, + args.max_seq_len) else: - embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config) + embeddings = ernie_encoder( + ernie_inputs, ernie_config=ernie_config) # user defined model based on ernie embeddings loss, accuracy, num_seqs = create_model( - args, - embeddings, - labels=labels, - is_prediction=False) + args, embeddings, labels=labels, is_prediction=False) optimizer = fluid.optimizer.Adam(learning_rate=args.lr) optimizer.minimize(loss) @@ -218,62 +214,59 @@ def main(args): lower_mem, upper_mem, unit = 
fluid.contrib.memory_usage( program=train_program, batch_size=args.batch_size) print("Theoretical memory usage in training: %.3f - %.3f %s" % - (lower_mem, upper_mem, unit)) + (lower_mem, upper_mem, unit)) if args.do_val: test_data_generator = reader.data_generator( - input_file=args.dev_set, - batch_size=args.batch_size, - phase='dev', - epoch=1, - shuffle=False) + input_file=args.dev_set, + batch_size=args.batch_size, + phase='dev', + epoch=1, + shuffle=False) test_prog = fluid.Program() with fluid.program_guard(test_prog, startup_prog): with fluid.unique_name.guard(): # create ernie_pyreader test_pyreader, ernie_inputs, labels = ernie_pyreader( - args, - pyreader_name='eval_reader') + args, pyreader_name='eval_reader') # get ernie_embeddings if args.use_paddle_hub: - embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, args.max_seq_len) + embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, + args.max_seq_len) else: - embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config) + embeddings = ernie_encoder( + ernie_inputs, ernie_config=ernie_config) # user defined model based on ernie embeddings loss, accuracy, num_seqs = create_model( - args, - embeddings, - labels=labels, - is_prediction=False) + args, embeddings, labels=labels, is_prediction=False) test_prog = test_prog.clone(for_test=True) if args.do_infer: infer_data_generator = reader.data_generator( - input_file=args.test_set, - batch_size=args.batch_size, - phase='infer', - epoch=1, - shuffle=False) + input_file=args.test_set, + batch_size=args.batch_size, + phase='infer', + epoch=1, + shuffle=False) infer_prog = fluid.Program() with fluid.program_guard(infer_prog, startup_prog): with fluid.unique_name.guard(): infer_pyreader, ernie_inputs, labels = ernie_pyreader( - args, - pyreader_name="infer_pyreader") + args, pyreader_name="infer_pyreader") # get ernie_embeddings if args.use_paddle_hub: - embeddings = ernie_encoder_with_paddle_hub(ernie_inputs, args.max_seq_len) + embeddings = 
ernie_encoder_with_paddle_hub(ernie_inputs, + args.max_seq_len) else: - embeddings = ernie_encoder(ernie_inputs, ernie_config=ernie_config) + embeddings = ernie_encoder( + ernie_inputs, ernie_config=ernie_config) - probs = create_model(args, - embeddings, - labels=labels, - is_prediction=True) + probs = create_model( + args, embeddings, labels=labels, is_prediction=True) infer_prog = infer_prog.clone(for_test=True) @@ -282,25 +275,17 @@ def main(args): if args.do_train: if args.init_checkpoint: init_checkpoint( - exe, - args.init_checkpoint, - main_program=train_program) + exe, args.init_checkpoint, main_program=train_program) elif args.do_val: if not args.init_checkpoint: raise ValueError("args 'init_checkpoint' should be set if" "only doing validation or testing!") - init_checkpoint( - exe, - args.init_checkpoint, - main_program=test_prog) + init_checkpoint(exe, args.init_checkpoint, main_program=test_prog) elif args.do_infer: if not args.init_checkpoint: raise ValueError("args 'init_checkpoint' should be set if" "only doing validation or testing!") - init_checkpoint( - exe, - args.init_checkpoint, - main_program=infer_prog) + init_checkpoint(exe, args.init_checkpoint, main_program=infer_prog) if args.do_train: train_exe = exe @@ -327,7 +312,9 @@ def main(args): else: fetch_list = [] - outputs = train_exe.run(program=train_program, fetch_list=fetch_list, return_numpy=False) + outputs = train_exe.run(program=train_program, + fetch_list=fetch_list, + return_numpy=False) if steps % args.skip_steps == 0: np_loss, np_acc, np_num_seqs = outputs np_loss = np.array(np_loss) @@ -338,31 +325,31 @@ def main(args): total_num_seqs.extend(np_num_seqs) if args.verbose: - verbose = "train pyreader queue size: %d, " % train_pyreader.queue.size() + verbose = "train pyreader queue size: %d, " % train_pyreader.queue.size( + ) print(verbose) time_end = time.time() used_time = time_end - time_begin print("step: %d, ave loss: %f, " - "ave acc: %f, speed: %f steps/s" % - (steps, 
np.sum(total_cost) / np.sum(total_num_seqs), - np.sum(total_acc) / np.sum(total_num_seqs), - args.skip_steps / used_time)) + "ave acc: %f, speed: %f steps/s" % + (steps, np.sum(total_cost) / np.sum(total_num_seqs), + np.sum(total_acc) / np.sum(total_num_seqs), + args.skip_steps / used_time)) total_cost, total_acc, total_num_seqs = [], [], [] time_begin = time.time() if steps % args.save_steps == 0: save_path = os.path.join(args.checkpoints, - "step_" + str(steps), - "checkpoint") + "step_" + str(steps), "checkpoint") fluid.save(train_program, save_path) if steps % args.validation_steps == 0: # evaluate dev set if args.do_val: evaluate(exe, test_prog, test_pyreader, - [loss.name, accuracy.name, num_seqs.name], - "dev") + [loss.name, accuracy.name, num_seqs.name], + "dev") except fluid.core.EOFException: save_path = os.path.join(args.checkpoints, "step_" + str(steps), @@ -375,13 +362,13 @@ def main(args): if args.do_val: print("Final validation result:") evaluate(exe, test_prog, test_pyreader, - [loss.name, accuracy.name, num_seqs.name], "dev") + [loss.name, accuracy.name, num_seqs.name], "dev") # final eval on test set if args.do_infer: print("Final test result:") - infer(exe, infer_prog, infer_pyreader, - [probs.name], "infer") + infer(exe, infer_prog, infer_pyreader, [probs.name], "infer") + if __name__ == "__main__": args = PDConfig() diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/README.md b/PaddleNLP/seq2seq/seq2seq/README.md similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/README.md rename to PaddleNLP/seq2seq/seq2seq/README.md diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/__init__.py b/PaddleNLP/seq2seq/seq2seq/__init__.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/__init__.py rename to PaddleNLP/seq2seq/seq2seq/__init__.py diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/args.py b/PaddleNLP/seq2seq/seq2seq/args.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/args.py rename to 
PaddleNLP/seq2seq/seq2seq/args.py diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/attention_model.py b/PaddleNLP/seq2seq/seq2seq/attention_model.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/attention_model.py rename to PaddleNLP/seq2seq/seq2seq/attention_model.py diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/base_model.py b/PaddleNLP/seq2seq/seq2seq/base_model.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/base_model.py rename to PaddleNLP/seq2seq/seq2seq/base_model.py diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/download.py b/PaddleNLP/seq2seq/seq2seq/download.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/download.py rename to PaddleNLP/seq2seq/seq2seq/download.py diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/infer.py b/PaddleNLP/seq2seq/seq2seq/infer.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/infer.py rename to PaddleNLP/seq2seq/seq2seq/infer.py diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/infer.sh b/PaddleNLP/seq2seq/seq2seq/infer.sh similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/infer.sh rename to PaddleNLP/seq2seq/seq2seq/infer.sh diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/reader.py b/PaddleNLP/seq2seq/seq2seq/reader.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/reader.py rename to PaddleNLP/seq2seq/seq2seq/reader.py diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/run.sh b/PaddleNLP/seq2seq/seq2seq/run.sh similarity index 100% rename from PaddleNLP/PaddleTextGEN/seq2seq/run.sh rename to PaddleNLP/seq2seq/seq2seq/run.sh diff --git a/PaddleNLP/PaddleTextGEN/seq2seq/train.py b/PaddleNLP/seq2seq/seq2seq/train.py similarity index 98% rename from PaddleNLP/PaddleTextGEN/seq2seq/train.py rename to PaddleNLP/seq2seq/seq2seq/train.py index 08c7d5a0a8ea0a1e6f48e3ae1da67ed362154658..33d8b51b71584f745ad2009d56f4b5ab5b0c9a41 100644 --- a/PaddleNLP/PaddleTextGEN/seq2seq/train.py +++ b/PaddleNLP/seq2seq/seq2seq/train.py @@ -214,7 +214,7 @@ def 
main(): ce_ppl.append(np.exp(total_loss / word_count)) total_loss = 0.0 word_count = 0.0 - + # profiler tools if args.profile and epoch_id == 0 and batch_id == 100: profiler.reset_profiler() @@ -230,8 +230,7 @@ def main(): if not args.profile: save_path = os.path.join(args.model_path, - "epoch_" + str(epoch_id), - "checkpoint") + "epoch_" + str(epoch_id), "checkpoint") print("begin to save", save_path) fluid.save(train_program, save_path) print("save finished") diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/README.md b/PaddleNLP/seq2seq/variational_seq2seq/README.md similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/README.md rename to PaddleNLP/seq2seq/variational_seq2seq/README.md diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/__init__.py b/PaddleNLP/seq2seq/variational_seq2seq/__init__.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/__init__.py rename to PaddleNLP/seq2seq/variational_seq2seq/__init__.py diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/args.py b/PaddleNLP/seq2seq/variational_seq2seq/args.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/args.py rename to PaddleNLP/seq2seq/variational_seq2seq/args.py diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/download.py b/PaddleNLP/seq2seq/variational_seq2seq/download.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/download.py rename to PaddleNLP/seq2seq/variational_seq2seq/download.py diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.py b/PaddleNLP/seq2seq/variational_seq2seq/infer.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.py rename to PaddleNLP/seq2seq/variational_seq2seq/infer.py diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.sh b/PaddleNLP/seq2seq/variational_seq2seq/infer.sh similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/infer.sh rename to 
PaddleNLP/seq2seq/variational_seq2seq/infer.sh diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/model.py b/PaddleNLP/seq2seq/variational_seq2seq/model.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/model.py rename to PaddleNLP/seq2seq/variational_seq2seq/model.py diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/reader.py b/PaddleNLP/seq2seq/variational_seq2seq/reader.py similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/reader.py rename to PaddleNLP/seq2seq/variational_seq2seq/reader.py diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/run.sh b/PaddleNLP/seq2seq/variational_seq2seq/run.sh similarity index 100% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/run.sh rename to PaddleNLP/seq2seq/variational_seq2seq/run.sh diff --git a/PaddleNLP/PaddleTextGEN/variational_seq2seq/train.py b/PaddleNLP/seq2seq/variational_seq2seq/train.py similarity index 98% rename from PaddleNLP/PaddleTextGEN/variational_seq2seq/train.py rename to PaddleNLP/seq2seq/variational_seq2seq/train.py index 1a1883e74f3abc2adc0dece7c8cfdf6e281a431d..ae2973fb03b4d9aa7e617a8f839616d4a4f569a6 100644 --- a/PaddleNLP/PaddleTextGEN/variational_seq2seq/train.py +++ b/PaddleNLP/seq2seq/variational_seq2seq/train.py @@ -256,8 +256,8 @@ def main(): best_ppl = test_ppl best_epoch_id = epoch_id save_path = os.path.join(args.model_path, - "epoch_" + str(best_epoch_id), - "checkpoint") + "epoch_" + str(best_epoch_id), + "checkpoint") print("save model {}".format(save_path)) fluid.save(main_program, save_path) else: diff --git a/PaddleNLP/models/__init__.py b/PaddleNLP/shared_modules/__init__.py similarity index 100% rename from PaddleNLP/models/__init__.py rename to PaddleNLP/shared_modules/__init__.py diff --git a/PaddleNLP/models/classification/__init__.py b/PaddleNLP/shared_modules/models/__init__.py similarity index 100% rename from PaddleNLP/models/classification/__init__.py rename to 
PaddleNLP/shared_modules/models/__init__.py diff --git a/PaddleNLP/models/language_model/__init__.py b/PaddleNLP/shared_modules/models/classification/__init__.py similarity index 100% rename from PaddleNLP/models/language_model/__init__.py rename to PaddleNLP/shared_modules/models/classification/__init__.py diff --git a/PaddleNLP/models/classification/nets.py b/PaddleNLP/shared_modules/models/classification/nets.py similarity index 99% rename from PaddleNLP/models/classification/nets.py rename to PaddleNLP/shared_modules/models/classification/nets.py index c66b3927972472d1bba60c49dac9dc5f70795634..d05f006823e143dff2e241a768785130882f6e79 100644 --- a/PaddleNLP/models/classification/nets.py +++ b/PaddleNLP/shared_modules/models/classification/nets.py @@ -4,6 +4,7 @@ This module provide nets for text classification import paddle.fluid as fluid + def bow_net(data, seq_len, label, diff --git a/PaddleNLP/models/matching/__init__.py b/PaddleNLP/shared_modules/models/language_model/__init__.py similarity index 100% rename from PaddleNLP/models/matching/__init__.py rename to PaddleNLP/shared_modules/models/language_model/__init__.py diff --git a/PaddleNLP/models/language_model/lm_model.py b/PaddleNLP/shared_modules/models/language_model/lm_model.py similarity index 100% rename from PaddleNLP/models/language_model/lm_model.py rename to PaddleNLP/shared_modules/models/language_model/lm_model.py diff --git a/PaddleNLP/models/matching/losses/__init__.py b/PaddleNLP/shared_modules/models/matching/__init__.py similarity index 100% rename from PaddleNLP/models/matching/losses/__init__.py rename to PaddleNLP/shared_modules/models/matching/__init__.py diff --git a/PaddleNLP/models/matching/bow.py b/PaddleNLP/shared_modules/models/matching/bow.py similarity index 100% rename from PaddleNLP/models/matching/bow.py rename to PaddleNLP/shared_modules/models/matching/bow.py diff --git a/PaddleNLP/models/matching/cnn.py b/PaddleNLP/shared_modules/models/matching/cnn.py similarity index 
94% rename from PaddleNLP/models/matching/cnn.py rename to PaddleNLP/shared_modules/models/matching/cnn.py index 3e31292cabfb9ea5a340b7941bcc2ef3885d3202..f78b5bee511ba107ad2ae7819768186364f20142 100644 --- a/PaddleNLP/models/matching/cnn.py +++ b/PaddleNLP/shared_modules/models/matching/cnn.py @@ -43,8 +43,8 @@ class CNN(object): left_emb = emb_layer.ops(left) right_emb = emb_layer.ops(right) # Presentation context - cnn_layer = layers.SequenceConvPoolLayer( - self.filter_size, self.num_filters, "conv") + cnn_layer = layers.SequenceConvPoolLayer(self.filter_size, + self.num_filters, "conv") left_cnn = cnn_layer.ops(left_emb) right_cnn = cnn_layer.ops(right_emb) # matching layer diff --git a/PaddleNLP/models/matching/gru.py b/PaddleNLP/shared_modules/models/matching/gru.py similarity index 100% rename from PaddleNLP/models/matching/gru.py rename to PaddleNLP/shared_modules/models/matching/gru.py diff --git a/PaddleNLP/models/matching/optimizers/__init__.py b/PaddleNLP/shared_modules/models/matching/losses/__init__.py similarity index 100% rename from PaddleNLP/models/matching/optimizers/__init__.py rename to PaddleNLP/shared_modules/models/matching/losses/__init__.py diff --git a/PaddleNLP/models/matching/losses/hinge_loss.py b/PaddleNLP/shared_modules/models/matching/losses/hinge_loss.py similarity index 100% rename from PaddleNLP/models/matching/losses/hinge_loss.py rename to PaddleNLP/shared_modules/models/matching/losses/hinge_loss.py diff --git a/PaddleNLP/models/matching/losses/log_loss.py b/PaddleNLP/shared_modules/models/matching/losses/log_loss.py similarity index 100% rename from PaddleNLP/models/matching/losses/log_loss.py rename to PaddleNLP/shared_modules/models/matching/losses/log_loss.py diff --git a/PaddleNLP/models/matching/losses/softmax_cross_entropy_loss.py b/PaddleNLP/shared_modules/models/matching/losses/softmax_cross_entropy_loss.py similarity index 100% rename from PaddleNLP/models/matching/losses/softmax_cross_entropy_loss.py rename to 
PaddleNLP/shared_modules/models/matching/losses/softmax_cross_entropy_loss.py diff --git a/PaddleNLP/models/matching/lstm.py b/PaddleNLP/shared_modules/models/matching/lstm.py similarity index 100% rename from PaddleNLP/models/matching/lstm.py rename to PaddleNLP/shared_modules/models/matching/lstm.py diff --git a/PaddleNLP/models/matching/mm_dnn.py b/PaddleNLP/shared_modules/models/matching/mm_dnn.py similarity index 100% rename from PaddleNLP/models/matching/mm_dnn.py rename to PaddleNLP/shared_modules/models/matching/mm_dnn.py diff --git a/PaddleNLP/models/neural_machine_translation/transformer/__init__.py b/PaddleNLP/shared_modules/models/matching/optimizers/__init__.py similarity index 100% rename from PaddleNLP/models/neural_machine_translation/transformer/__init__.py rename to PaddleNLP/shared_modules/models/matching/optimizers/__init__.py diff --git a/PaddleNLP/models/matching/optimizers/paddle_optimizers.py b/PaddleNLP/shared_modules/models/matching/optimizers/paddle_optimizers.py similarity index 100% rename from PaddleNLP/models/matching/optimizers/paddle_optimizers.py rename to PaddleNLP/shared_modules/models/matching/optimizers/paddle_optimizers.py diff --git a/PaddleNLP/models/matching/paddle_layers.py b/PaddleNLP/shared_modules/models/matching/paddle_layers.py similarity index 100% rename from PaddleNLP/models/matching/paddle_layers.py rename to PaddleNLP/shared_modules/models/matching/paddle_layers.py diff --git a/PaddleNLP/models/model_check.py b/PaddleNLP/shared_modules/models/model_check.py similarity index 85% rename from PaddleNLP/models/model_check.py rename to PaddleNLP/shared_modules/models/model_check.py index 51713452a7f0b1019c7b8b7d37d24e0c5f15c77c..2aafb31d85c4de54c57447e52415ea3214ce4bd5 100644 --- a/PaddleNLP/models/model_check.py +++ b/PaddleNLP/shared_modules/models/model_check.py @@ -33,20 +33,21 @@ def check_cuda(use_cuda, err = \ except Exception as e: pass + def check_version(): - """ + """ Log error and exit when the installed 
version of paddlepaddle is not satisfied. """ - err = "PaddlePaddle version 1.6 or higher is required, " \ - "or a suitable develop version is satisfied as well. \n" \ - "Please make sure the version is good with your code." \ + err = "PaddlePaddle version 1.6 or higher is required, " \ + "or a suitable develop version is satisfied as well. \n" \ + "Please make sure the version is good with your code." \ - try: - fluid.require_version('1.6.0') - except Exception as e: - print(err) - sys.exit(1) + try: + fluid.require_version('1.6.0') + except Exception as e: + print(err) + sys.exit(1) def check_version(): diff --git a/PaddleNLP/models/reading_comprehension/__init__.py b/PaddleNLP/shared_modules/models/neural_machine_translation/transformer/__init__.py similarity index 100% rename from PaddleNLP/models/reading_comprehension/__init__.py rename to PaddleNLP/shared_modules/models/neural_machine_translation/transformer/__init__.py diff --git a/PaddleNLP/models/neural_machine_translation/transformer/desc.py b/PaddleNLP/shared_modules/models/neural_machine_translation/transformer/desc.py similarity index 100% rename from PaddleNLP/models/neural_machine_translation/transformer/desc.py rename to PaddleNLP/shared_modules/models/neural_machine_translation/transformer/desc.py diff --git a/PaddleNLP/models/neural_machine_translation/transformer/model.py b/PaddleNLP/shared_modules/models/neural_machine_translation/transformer/model.py similarity index 100% rename from PaddleNLP/models/neural_machine_translation/transformer/model.py rename to PaddleNLP/shared_modules/models/neural_machine_translation/transformer/model.py diff --git a/PaddleNLP/models/representation/__init__.py b/PaddleNLP/shared_modules/models/reading_comprehension/__init__.py similarity index 100% rename from PaddleNLP/models/representation/__init__.py rename to PaddleNLP/shared_modules/models/reading_comprehension/__init__.py diff --git a/PaddleNLP/models/reading_comprehension/bidaf_model.py 
b/PaddleNLP/shared_modules/models/reading_comprehension/bidaf_model.py similarity index 100% rename from PaddleNLP/models/reading_comprehension/bidaf_model.py rename to PaddleNLP/shared_modules/models/reading_comprehension/bidaf_model.py diff --git a/PaddleNLP/models/sequence_labeling/__init__.py b/PaddleNLP/shared_modules/models/representation/__init__.py similarity index 100% rename from PaddleNLP/models/sequence_labeling/__init__.py rename to PaddleNLP/shared_modules/models/representation/__init__.py diff --git a/PaddleNLP/models/representation/ernie.py b/PaddleNLP/shared_modules/models/representation/ernie.py similarity index 96% rename from PaddleNLP/models/representation/ernie.py rename to PaddleNLP/shared_modules/models/representation/ernie.py index a12c483f0c132c0bf29f89175dbaddef6d6a64b8..a00c9cb0fefd7536567db89eb331a5613f6da01d 100644 --- a/PaddleNLP/models/representation/ernie.py +++ b/PaddleNLP/shared_modules/models/representation/ernie.py @@ -30,10 +30,14 @@ from models.transformer_encoder import encoder, pre_process_layer def ernie_pyreader(args, pyreader_name): """define standard ernie pyreader""" - src_ids = fluid.data(name='1', shape=[-1, args.max_seq_len, 1], dtype='int64') - sent_ids = fluid.data(name='2', shape=[-1, args.max_seq_len, 1], dtype='int64') - pos_ids = fluid.data(name='3', shape=[-1, args.max_seq_len, 1], dtype='int64') - input_mask = fluid.data(name='4', shape=[-1, args.max_seq_len, 1], dtype='float32') + src_ids = fluid.data( + name='1', shape=[-1, args.max_seq_len, 1], dtype='int64') + sent_ids = fluid.data( + name='2', shape=[-1, args.max_seq_len, 1], dtype='int64') + pos_ids = fluid.data( + name='3', shape=[-1, args.max_seq_len, 1], dtype='int64') + input_mask = fluid.data( + name='4', shape=[-1, args.max_seq_len, 1], dtype='float32') labels = fluid.data(name='5', shape=[-1, 1], dtype='int64') seq_lens = fluid.data(name='6', shape=[-1], dtype='int64') diff --git a/PaddleNLP/preprocess/__init__.py 
b/PaddleNLP/shared_modules/models/sequence_labeling/__init__.py similarity index 100% rename from PaddleNLP/preprocess/__init__.py rename to PaddleNLP/shared_modules/models/sequence_labeling/__init__.py diff --git a/PaddleNLP/models/sequence_labeling/nets.py b/PaddleNLP/shared_modules/models/sequence_labeling/nets.py similarity index 100% rename from PaddleNLP/models/sequence_labeling/nets.py rename to PaddleNLP/shared_modules/models/sequence_labeling/nets.py diff --git a/PaddleNLP/models/transformer_encoder.py b/PaddleNLP/shared_modules/models/transformer_encoder.py similarity index 100% rename from PaddleNLP/models/transformer_encoder.py rename to PaddleNLP/shared_modules/models/transformer_encoder.py diff --git a/PaddleNLP/preprocess/ernie/__init__.py b/PaddleNLP/shared_modules/preprocess/__init__.py similarity index 100% rename from PaddleNLP/preprocess/ernie/__init__.py rename to PaddleNLP/shared_modules/preprocess/__init__.py diff --git a/PaddleNLP/preprocess/tokenizer/conf/customization.dic b/PaddleNLP/shared_modules/preprocess/ernie/__init__.py similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/customization.dic rename to PaddleNLP/shared_modules/preprocess/ernie/__init__.py diff --git a/PaddleNLP/preprocess/ernie/task_reader.py b/PaddleNLP/shared_modules/preprocess/ernie/task_reader.py similarity index 99% rename from PaddleNLP/preprocess/ernie/task_reader.py rename to PaddleNLP/shared_modules/preprocess/ernie/task_reader.py index b3a8a0d790eb8ae592167b129b2c707ba2318b6f..38a6e56df56923387f78ebeb9652df5398cb34a5 100644 --- a/PaddleNLP/preprocess/ernie/task_reader.py +++ b/PaddleNLP/shared_modules/preprocess/ernie/task_reader.py @@ -29,6 +29,7 @@ from preprocess.ernie import tokenization from preprocess.padding import pad_batch_data import io + def csv_reader(fd, delimiter='\t'): def gen(): for i in fd: @@ -37,8 +38,10 @@ def csv_reader(fd, delimiter='\t'): yield slots, else: yield slots + return gen() + class BaseReader(object): 
"""BaseReader for classify and sequence labeling task""" diff --git a/PaddleNLP/preprocess/ernie/tokenization.py b/PaddleNLP/shared_modules/preprocess/ernie/tokenization.py similarity index 99% rename from PaddleNLP/preprocess/ernie/tokenization.py rename to PaddleNLP/shared_modules/preprocess/ernie/tokenization.py index 2a06a5818243e4d71aae93fdd1af86c6b14a66b8..08570f30fe9e6a8036a15095e67e6e8dd8686c14 100644 --- a/PaddleNLP/preprocess/ernie/tokenization.py +++ b/PaddleNLP/shared_modules/preprocess/ernie/tokenization.py @@ -23,6 +23,7 @@ import unicodedata import six import io + def convert_to_unicode(text): """Converts `text` to Unicode (if it's not already), assuming utf-8 input.""" if six.PY3: diff --git a/PaddleNLP/preprocess/padding.py b/PaddleNLP/shared_modules/preprocess/padding.py similarity index 100% rename from PaddleNLP/preprocess/padding.py rename to PaddleNLP/shared_modules/preprocess/padding.py diff --git a/PaddleNLP/preprocess/tokenizer/README b/PaddleNLP/shared_modules/preprocess/tokenizer/README similarity index 100% rename from PaddleNLP/preprocess/tokenizer/README rename to PaddleNLP/shared_modules/preprocess/tokenizer/README diff --git a/PaddleNLP/shared_modules/preprocess/tokenizer/conf/customization.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/customization.dic new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/PaddleNLP/preprocess/tokenizer/conf/customization.dic.example b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/customization.dic.example similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/customization.dic.example rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/customization.dic.example diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/__model__ b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/__model__ similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/__model__ rename to 
PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/__model__ diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/crfw b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/crfw similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/crfw rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/crfw diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_0.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_0.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_0.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_0.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_0.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_0.w_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_0.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_0.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_1.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_1.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_1.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_1.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_1.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_1.w_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_1.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_1.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_2.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_2.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_2.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_2.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_2.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_2.w_0 similarity index 100% rename from 
PaddleNLP/preprocess/tokenizer/conf/model/fc_2.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_2.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_3.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_3.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_3.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_3.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_3.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_3.w_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_3.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_3.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_4.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_4.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_4.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_4.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/fc_4.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_4.w_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/fc_4.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/fc_4.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_0.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_0.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_0.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_0.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_0.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_0.w_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_0.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_0.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_1.b_0 
b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_1.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_1.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_1.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_1.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_1.w_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_1.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_1.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_2.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_2.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_2.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_2.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_2.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_2.w_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_2.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_2.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_3.b_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_3.b_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_3.b_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_3.b_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/gru_3.w_0 b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_3.w_0 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/gru_3.w_0 rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/gru_3.w_0 diff --git a/PaddleNLP/preprocess/tokenizer/conf/model/word_emb b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/word_emb similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/model/word_emb rename to 
PaddleNLP/shared_modules/preprocess/tokenizer/conf/model/word_emb diff --git a/PaddleNLP/preprocess/tokenizer/conf/q2b.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/q2b.dic similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/q2b.dic rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/q2b.dic diff --git a/PaddleNLP/preprocess/tokenizer/conf/strong_punc.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/strong_punc.dic similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/strong_punc.dic rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/strong_punc.dic diff --git a/PaddleNLP/preprocess/tokenizer/conf/tag.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/tag.dic similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/tag.dic rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/tag.dic diff --git a/PaddleNLP/preprocess/tokenizer/conf/word.dic b/PaddleNLP/shared_modules/preprocess/tokenizer/conf/word.dic similarity index 100% rename from PaddleNLP/preprocess/tokenizer/conf/word.dic rename to PaddleNLP/shared_modules/preprocess/tokenizer/conf/word.dic diff --git a/PaddleNLP/preprocess/tokenizer/reader.py b/PaddleNLP/shared_modules/preprocess/tokenizer/reader.py similarity index 100% rename from PaddleNLP/preprocess/tokenizer/reader.py rename to PaddleNLP/shared_modules/preprocess/tokenizer/reader.py diff --git a/PaddleNLP/preprocess/tokenizer/test.txt.utf8 b/PaddleNLP/shared_modules/preprocess/tokenizer/test.txt.utf8 similarity index 100% rename from PaddleNLP/preprocess/tokenizer/test.txt.utf8 rename to PaddleNLP/shared_modules/preprocess/tokenizer/test.txt.utf8 diff --git a/PaddleNLP/preprocess/tokenizer/tokenizer.py b/PaddleNLP/shared_modules/preprocess/tokenizer/tokenizer.py similarity index 100% rename from PaddleNLP/preprocess/tokenizer/tokenizer.py rename to PaddleNLP/shared_modules/preprocess/tokenizer/tokenizer.py diff --git 
a/PaddleNLP/similarity_net/run_classifier.py b/PaddleNLP/similarity_net/run_classifier.py index 9fbb338490afad0a0d438f86c24613464dc6dac8..922b84f859fc4546a6e4fdeeb31fcf8f0d6a5d16 100644 --- a/PaddleNLP/similarity_net/run_classifier.py +++ b/PaddleNLP/similarity_net/run_classifier.py @@ -30,7 +30,7 @@ if sys.getdefaultencoding() != defaultencoding: reload(sys) sys.setdefaultencoding(defaultencoding) -sys.path.append("..") +sys.path.append("../shared_modules/") import paddle import paddle.fluid as fluid @@ -47,18 +47,18 @@ from models.model_check import check_version from models.model_check import check_cuda -def create_model(args, pyreader_name, is_inference = False, is_pointwise = False): +def create_model(args, pyreader_name, is_inference=False, is_pointwise=False): """ Create Model for simnet """ if is_inference: inf_pyreader = fluid.layers.py_reader( - capacity=16, - shapes=([-1,1], [-1,1]), - dtypes=('int64', 'int64'), - lod_levels=(1, 1), - name=pyreader_name, - use_double_buffer=False) + capacity=16, + shapes=([-1, 1], [-1, 1]), + dtypes=('int64', 'int64'), + lod_levels=(1, 1), + name=pyreader_name, + use_double_buffer=False) left, pos_right = fluid.layers.read_file(inf_pyreader) return inf_pyreader, left, pos_right @@ -66,28 +66,30 @@ def create_model(args, pyreader_name, is_inference = False, is_pointwise = False else: if is_pointwise: pointwise_pyreader = fluid.layers.py_reader( - capacity=16, - shapes=([-1,1], [-1,1], [-1,1]), - dtypes=('int64', 'int64', 'int64'), - lod_levels=(1, 1, 0), - name=pyreader_name, - use_double_buffer=False) + capacity=16, + shapes=([-1, 1], [-1, 1], [-1, 1]), + dtypes=('int64', 'int64', 'int64'), + lod_levels=(1, 1, 0), + name=pyreader_name, + use_double_buffer=False) left, right, label = fluid.layers.read_file(pointwise_pyreader) return pointwise_pyreader, left, right, label else: pairwise_pyreader = fluid.layers.py_reader( - capacity=16, - shapes=([-1,1], [-1,1], [-1,1]), - dtypes=('int64', 'int64', 'int64'), - 
lod_levels=(1, 1, 1), - name=pyreader_name, - use_double_buffer=False) - - left, pos_right, neg_right = fluid.layers.read_file(pairwise_pyreader) + capacity=16, + shapes=([-1, 1], [-1, 1], [-1, 1]), + dtypes=('int64', 'int64', 'int64'), + lod_levels=(1, 1, 1), + name=pyreader_name, + use_double_buffer=False) + + left, pos_right, neg_right = fluid.layers.read_file( + pairwise_pyreader) return pairwise_pyreader, left, pos_right, neg_right - + + def train(conf_dict, args): """ train processic @@ -97,16 +99,16 @@ def train(conf_dict, args): # get vocab size conf_dict['dict_size'] = len(vocab) # Load network structure dynamically - net = utils.import_class("../models/matching", + net = utils.import_class("../shared_modules/models/matching", conf_dict["net"]["module_name"], conf_dict["net"]["class_name"])(conf_dict) # Load loss function dynamically - loss = utils.import_class("../models/matching/losses", + loss = utils.import_class("../shared_modules/models/matching/losses", conf_dict["loss"]["module_name"], conf_dict["loss"]["class_name"])(conf_dict) # Load Optimization method optimizer = utils.import_class( - "../models/matching/optimizers", "paddle_optimizers", + "../shared_modules/models/matching/optimizers", "paddle_optimizers", conf_dict["optimizer"]["class_name"])(conf_dict) # load auc method metric = fluid.metrics.Auc(name="auc") @@ -131,22 +133,23 @@ def train(conf_dict, args): with fluid.program_guard(train_program, startup_prog): with fluid.unique_name.guard(): train_pyreader, left, pos_right, neg_right = create_model( - args, - pyreader_name='train_reader') + args, pyreader_name='train_reader') left_feat, pos_score = net.predict(left, pos_right) pred = pos_score _, neg_score = net.predict(left, neg_right) avg_cost = loss.compute(pos_score, neg_score) avg_cost.persistable = True optimizer.ops(avg_cost) - + # Get Reader - get_train_examples = simnet_process.get_reader("train",epoch=args.epoch) + get_train_examples = simnet_process.get_reader( + "train", 
epoch=args.epoch) if args.do_valid: test_prog = fluid.Program() with fluid.program_guard(test_prog, startup_prog): with fluid.unique_name.guard(): - test_pyreader, left, pos_right= create_model(args, pyreader_name = 'test_reader',is_inference=True) + test_pyreader, left, pos_right = create_model( + args, pyreader_name='test_reader', is_inference=True) left_feat, pos_score = net.predict(left, pos_right) pred = pos_score test_prog = test_prog.clone(for_test=True) @@ -156,40 +159,41 @@ def train(conf_dict, args): with fluid.program_guard(train_program, startup_prog): with fluid.unique_name.guard(): train_pyreader, left, right, label = create_model( - args, - pyreader_name='train_reader', - is_pointwise=True) + args, pyreader_name='train_reader', is_pointwise=True) left_feat, pred = net.predict(left, right) avg_cost = loss.compute(pred, label) avg_cost.persistable = True optimizer.ops(avg_cost) # Get Feeder and Reader - get_train_examples = simnet_process.get_reader("train",epoch=args.epoch) + get_train_examples = simnet_process.get_reader( + "train", epoch=args.epoch) if args.do_valid: test_prog = fluid.Program() with fluid.program_guard(test_prog, startup_prog): with fluid.unique_name.guard(): - test_pyreader, left, right= create_model(args, pyreader_name = 'test_reader',is_inference=True) + test_pyreader, left, right = create_model( + args, pyreader_name='test_reader', is_inference=True) left_feat, pred = net.predict(left, right) test_prog = test_prog.clone(for_test=True) if args.init_checkpoint is not "": - utils.init_checkpoint(exe, args.init_checkpoint, - startup_prog) + utils.init_checkpoint(exe, args.init_checkpoint, startup_prog) - def valid_and_test(test_program, test_pyreader, get_valid_examples, process, mode, exe, fetch_list): + def valid_and_test(test_program, test_pyreader, get_valid_examples, process, + mode, exe, fetch_list): """ return auc and acc """ # Get Batch Data - batch_data = fluid.io.batch(get_valid_examples, args.batch_size, drop_last=False) 
+ batch_data = fluid.io.batch( + get_valid_examples, args.batch_size, drop_last=False) test_pyreader.decorate_paddle_reader(batch_data) test_pyreader.start() pred_list = [] while True: try: - _pred = exe.run(program=test_program,fetch_list=[pred.name]) + _pred = exe.run(program=test_program, fetch_list=[pred.name]) pred_list += list(_pred) except fluid.core.EOFException: test_pyreader.reset() @@ -222,11 +226,12 @@ def train(conf_dict, args): #for epoch_id in range(args.epoch): # used for continuous evaluation if args.enable_ce: - train_batch_data = fluid.io.batch(get_train_examples, args.batch_size, drop_last=False) + train_batch_data = fluid.io.batch( + get_train_examples, args.batch_size, drop_last=False) else: train_batch_data = fluid.io.batch( fluid.io.shuffle( - get_train_examples, buf_size=10000), + get_train_examples, buf_size=10000), args.batch_size, drop_last=False) train_pyreader.decorate_paddle_reader(train_batch_data) @@ -238,25 +243,29 @@ def train(conf_dict, args): try: global_step += 1 fetch_list = [avg_cost.name] - avg_loss = train_exe.run(program=train_program, fetch_list = fetch_list) + avg_loss = train_exe.run(program=train_program, + fetch_list=fetch_list) losses.append(np.mean(avg_loss[0])) if args.do_valid and global_step % args.validation_steps == 0: get_valid_examples = simnet_process.get_reader("valid") - valid_result = valid_and_test(test_prog,test_pyreader,get_valid_examples,simnet_process,"valid",exe,[pred.name]) + valid_result = valid_and_test( + test_prog, test_pyreader, get_valid_examples, + simnet_process, "valid", exe, [pred.name]) if args.compute_accuracy: valid_auc, valid_acc = valid_result logging.info( - "global_steps: %d, valid_auc: %f, valid_acc: %f, valid_loss: %f" % - (global_step, valid_auc, valid_acc, np.mean(losses))) + "global_steps: %d, valid_auc: %f, valid_acc: %f, valid_loss: %f" + % (global_step, valid_auc, valid_acc, np.mean(losses))) else: valid_auc = valid_result - logging.info("global_steps: %d, valid_auc: %f, 
valid_loss: %f" % - (global_step, valid_auc, np.mean(losses))) + logging.info( + "global_steps: %d, valid_auc: %f, valid_loss: %f" % + (global_step, valid_auc, np.mean(losses))) if global_step % args.save_steps == 0: model_save_dir = os.path.join(args.output_dir, - conf_dict["model_path"]) + conf_dict["model_path"]) model_path = os.path.join(model_save_dir, str(global_step)) - + if not os.path.exists(model_save_dir): os.makedirs(model_save_dir) if args.task_mode == "pairwise": @@ -269,21 +278,19 @@ def train(conf_dict, args): ] target_vars = [left_feat, pred] fluid.io.save_inference_model(model_path, feed_var_names, - target_vars, exe, - test_prog) + target_vars, exe, test_prog) logging.info("saving infer model in %s" % model_path) - + except fluid.core.EOFException: train_pyreader.reset() break end_time = time.time() #logging.info("epoch: %d, loss: %f, used time: %d sec" % - #(epoch_id, np.mean(losses), end_time - start_time)) + #(epoch_id, np.mean(losses), end_time - start_time)) ce_info.append([np.mean(losses), end_time - start_time]) #final save - logging.info("the final step is %s" % global_step) - model_save_dir = os.path.join(args.output_dir, - conf_dict["model_path"]) + logging.info("the final step is %s" % global_step) + model_save_dir = os.path.join(args.output_dir, conf_dict["model_path"]) model_path = os.path.join(model_save_dir, str(global_step)) if not os.path.exists(model_save_dir): os.makedirs(model_save_dir) @@ -296,9 +303,8 @@ def train(conf_dict, args): right.name, ] target_vars = [left_feat, pred] - fluid.io.save_inference_model(model_path, feed_var_names, - target_vars, exe, - test_prog) + fluid.io.save_inference_model(model_path, feed_var_names, target_vars, exe, + test_prog) logging.info("saving infer model in %s" % model_path) # used for continuous evaluation if args.enable_ce: @@ -322,7 +328,9 @@ def train(conf_dict, args): else: # Get Feeder and Reader get_test_examples = simnet_process.get_reader("test") - test_result = 
valid_and_test(test_prog,test_pyreader,get_test_examples,simnet_process,"test",exe,[pred.name]) + test_result = valid_and_test(test_prog, test_pyreader, + get_test_examples, simnet_process, "test", + exe, [pred.name]) if args.compute_accuracy: test_auc, test_acc = test_result logging.info("AUC of test is %f, Accuracy of test is %f" % @@ -344,16 +352,17 @@ def test(conf_dict, args): vocab = utils.load_vocab(args.vocab_path) simnet_process = reader.SimNetProcessor(args, vocab) - + startup_prog = fluid.Program() get_test_examples = simnet_process.get_reader("test") - batch_data = fluid.io.batch(get_test_examples, args.batch_size, drop_last=False) + batch_data = fluid.io.batch( + get_test_examples, args.batch_size, drop_last=False) test_prog = fluid.Program() conf_dict['dict_size'] = len(vocab) - net = utils.import_class("../models/matching", + net = utils.import_class("../shared_modules/models/matching", conf_dict["net"]["module_name"], conf_dict["net"]["class_name"])(conf_dict) @@ -364,9 +373,7 @@ def test(conf_dict, args): with fluid.program_guard(test_prog, startup_prog): with fluid.unique_name.guard(): test_pyreader, left, pos_right = create_model( - args, - pyreader_name = 'test_reader', - is_inference=True) + args, pyreader_name='test_reader', is_inference=True) left_feat, pos_score = net.predict(left, pos_right) pred = pos_score test_prog = test_prog.clone(for_test=True) @@ -375,19 +382,14 @@ def test(conf_dict, args): with fluid.program_guard(test_prog, startup_prog): with fluid.unique_name.guard(): test_pyreader, left, right = create_model( - args, - pyreader_name = 'test_reader', - is_inference=True) + args, pyreader_name='test_reader', is_inference=True) left_feat, pred = net.predict(left, right) test_prog = test_prog.clone(for_test=True) exe.run(startup_prog) - utils.init_checkpoint( - exe, - args.init_checkpoint, - main_program=test_prog) - + utils.init_checkpoint(exe, args.init_checkpoint, main_program=test_prog) + test_exe = exe 
test_pyreader.decorate_paddle_reader(batch_data) @@ -398,15 +400,18 @@ def test(conf_dict, args): output = [] while True: try: - output = test_exe.run(program=test_prog,fetch_list=fetch_list) + output = test_exe.run(program=test_prog, fetch_list=fetch_list) if args.task_mode == "pairwise": - pred_list += list(map(lambda item: float(item[0]), output[0])) + pred_list += list( + map(lambda item: float(item[0]), output[0])) predictions_file.write(u"\n".join( - map(lambda item: str((item[0] + 1) / 2), output[0])) + "\n") + map(lambda item: str((item[0] + 1) / 2), output[0])) + + "\n") else: pred_list += map(lambda item: item, output[0]) predictions_file.write(u"\n".join( - map(lambda item: str(np.argmax(item)), output[0])) + "\n") + map(lambda item: str(np.argmax(item)), output[0])) + + "\n") except fluid.core.EOFException: test_pyreader.reset() break @@ -450,37 +455,37 @@ def infer(conf_dict, args): startup_prog = fluid.Program() get_infer_examples = simnet_process.get_infer_reader - batch_data = fluid.io.batch(get_infer_examples, args.batch_size, drop_last=False) + batch_data = fluid.io.batch( + get_infer_examples, args.batch_size, drop_last=False) test_prog = fluid.Program() conf_dict['dict_size'] = len(vocab) - net = utils.import_class("../models/matching", + net = utils.import_class("../shared_modules/models/matching", conf_dict["net"]["module_name"], conf_dict["net"]["class_name"])(conf_dict) if args.task_mode == "pairwise": with fluid.program_guard(test_prog, startup_prog): with fluid.unique_name.guard(): - infer_pyreader, left, pos_right = create_model(args, pyreader_name = 'infer_reader', is_inference = True) + infer_pyreader, left, pos_right = create_model( + args, pyreader_name='infer_reader', is_inference=True) left_feat, pos_score = net.predict(left, pos_right) pred = pos_score test_prog = test_prog.clone(for_test=True) else: with fluid.program_guard(test_prog, startup_prog): with fluid.unique_name.guard(): - infer_pyreader, left, right = create_model(args, 
pyreader_name = 'infer_reader', is_inference = True) + infer_pyreader, left, right = create_model( + args, pyreader_name='infer_reader', is_inference=True) left_feat, pred = net.predict(left, right) test_prog = test_prog.clone(for_test=True) exe.run(startup_prog) - utils.init_checkpoint( - exe, - args.init_checkpoint, - main_program=test_prog) - + utils.init_checkpoint(exe, args.init_checkpoint, main_program=test_prog) + test_exe = exe infer_pyreader.decorate_sample_list_generator(batch_data) @@ -490,16 +495,16 @@ def infer(conf_dict, args): output = [] infer_pyreader.start() while True: - try: - output = test_exe.run(program=test_prog,fetch_list=fetch_list) - if args.task_mode == "pairwise": - preds_list += list( - map(lambda item: str((item[0] + 1) / 2), output[0])) - else: - preds_list += map(lambda item: str(np.argmax(item)), output[0]) - except fluid.core.EOFException: - infer_pyreader.reset() - break + try: + output = test_exe.run(program=test_prog, fetch_list=fetch_list) + if args.task_mode == "pairwise": + preds_list += list( + map(lambda item: str((item[0] + 1) / 2), output[0])) + else: + preds_list += map(lambda item: str(np.argmax(item)), output[0]) + except fluid.core.EOFException: + infer_pyreader.reset() + break with io.open(args.infer_result_path, "w", encoding="utf8") as infer_file: for _data, _pred in zip(simnet_process.get_infer_data(), preds_list): infer_file.write(_data + "\t" + _pred + "\n") @@ -514,6 +519,7 @@ def get_cards(): num = len(cards.split(",")) return num + if __name__ == "__main__": args = ArgConfig() @@ -532,4 +538,4 @@ if __name__ == "__main__": infer(conf_dict, args) else: raise ValueError( - "one of do_train and do_test and do_infer must be True") \ No newline at end of file + "one of do_train and do_test and do_infer must be True") diff --git a/README.md b/README.md index 4701c4dac40421bac64de174725e7c172b975985..8d4eb077d5e04bd106919580a9e2ff741d8bc144 100644 --- a/README.md +++ b/README.md @@ -146,10 +146,10 @@ PaddlePaddle 
提供了丰富的计算单元,使得用户可以采用模块化 ## PaddleNLP -[**PaddleNLP**](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP) 是基于 PaddlePaddle 深度学习框架开发的自然语言处理 (NLP) 工具,算法,模型和数据的开源项目。百度在 NLP 领域十几年的深厚积淀为 PaddleNLP 提供了强大的核心动力。使用 PaddleNLP,您可以得到: +[**PaddleNLP**](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP) 是基于 PaddlePaddle 深度学习框架开发的自然语言处理 (NLP) 工具,算法,模型和数据的开源项目。百度在 NLP 领域十几年的深厚积淀为 PaddleNLP 提供了强大的核心动力。使用 PaddleNLP,您可以得到: - **丰富而全面的 NLP 任务支持:** - - PaddleNLP 为您提供了多粒度,多场景的应用支持。涵盖了从[分词](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis),[词性标注](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis),[命名实体识别](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis)等 NLP 基础技术,到[文本分类](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification),[文本相似度计算](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net),[语义表示](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK),[文本生成](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleTextGEN)等 NLP 核心技术。同时,PaddleNLP 还提供了针对常见 NLP 大型应用系统(如[阅读理解](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMRC),[对话系统](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue),[机器翻译系统](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMT)等)的特定核心技术和工具组件,模型和预训练参数等,让您在 NLP 领域畅通无阻。 + - PaddleNLP 为您提供了多粒度,多场景的应用支持。涵盖了从[分词](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/lexical_analysis),[词性标注](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/lexical_analysis),[命名实体识别](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/lexical_analysis)等 NLP 
基础技术,到[文本分类](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/sentiment_classification),[文本相似度计算](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/similarity_net),[语义表示](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/pretrain_language_models),[文本生成](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/seq2seq)等 NLP 核心技术。同时,PaddleNLP 还提供了针对常见 NLP 大型应用系统(如[阅读理解](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/machine_reading_comprehension),[对话系统](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/dialogue_system),[机器翻译系统](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/machine_translation)等)的特定核心技术和工具组件,模型和预训练参数等,让您在 NLP 领域畅通无阻。 - **稳定可靠的 NLP 模型和强大的预训练参数:** - PaddleNLP集成了百度内部广泛使用的 NLP 工具模型,为您提供了稳定可靠的 NLP 算法解决方案。基于百亿级数据的预训练参数和丰富的预训练模型,助您轻松提高模型效果,为您的 NLP 业务注入强大动力。 - **持续改进和技术支持,零基础搭建 NLP 应用:** @@ -159,30 +159,30 @@ PaddlePaddle 提供了丰富的计算单元,使得用户可以采用模块化 | 任务类型 | 目录 | 简介 | | ------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -| 中文词法分析 | [LAC(Lexical Analysis of Chinese)](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/lexical_analysis) | 百度自主研发中文特色模型词法分析任务,集成了中文分词、词性标注和命名实体识别任务。输入是一个字符串,而输出是句子中的词边界和词性、实体类别。 | -| 词向量 | [Word2vec](https://github.com/PaddlePaddle/models/tree/develop/PaddleRec/word2vec) | 提供单机多卡,多机等分布式训练中文词向量能力,支持主流词向量模型(skip-gram,cbow等),可以快速使用自定义数据训练词向量模型。 | -| 语言模型 | [Language_model](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/language_model) | 给定一个输入词序列(中文需要先分词、英文需要先 tokenize),计算其生成概率。 语言模型的评价指标 PPL(困惑度),用于表示模型生成句子的流利程度。 | +| 中文词法分析 | [LAC(Lexical Analysis of Chinese)](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/lexical_analysis) | 百度自主研发中文特色模型词法分析任务,集成了中文分词、词性标注和命名实体识别任务。输入是一个字符串,而输出是句子中的词边界和词性、实体类别。 | +| 词向量 | [Word2vec](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleRec/word2vec) | 
提供单机多卡,多机等分布式训练中文词向量能力,支持主流词向量模型(skip-gram,cbow等),可以快速使用自定义数据训练词向量模型。 | +| 语言模型 | [Language_model](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/language_model) | 给定一个输入词序列(中文需要先分词、英文需要先 tokenize),计算其生成概率。 语言模型的评价指标 PPL(困惑度),用于表示模型生成句子的流利程度。 | ### NLP 核心技术 #### 语义表示 -[PaddleLARK](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK) (Paddle LAngauge Representation ToolKit) 是传统语言模型的进一步发展,通过在大规模语料上训练得到的通用的语义表示模型,可以助益其他自然语言处理任务,是通用预训练 + 特定任务精调范式的体现。PaddleLARK 集成了 ELMO,BERT,ERNIE 1.0,ERNIE 2.0,XLNet 等热门中英文预训练模型。 +[PaddleLARK](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/pretrain_language_models) 通过在大规模语料上训练得到的通用的语义表示模型,可以助益其他自然语言处理任务,是通用预训练 + 特定任务精调范式的体现。PaddleLARK 集成了 ELMO,BERT,ERNIE 1.0,ERNIE 2.0,XLNet 等热门中英文预训练模型。 | 模型 | 简介 | | ------------------------------------------------------------ | ------------------------------------------------------------ | | [ERNIE](https://github.com/PaddlePaddle/ERNIE)(Enhanced Representation from kNowledge IntEgration) | 百度自研的语义表示模型,通过建模海量数据中的词、实体及实体关系,学习真实世界的语义知识。相较于 BERT 学习原始语言信号,ERNIE 直接对先验语义知识单元进行建模,增强了模型语义表示能力。 | -| [BERT](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK/BERT)(Bidirectional Encoder Representation from Transformers) | 一个迁移能力很强的通用语义表示模型, 以 Transformer 为网络基本组件,以双向 Masked Language Model和 Next Sentence Prediction 为训练目标,通过预训练得到通用语义表示,再结合简单的输出层,应用到下游的 NLP 任务,在多个任务上取得了 SOTA 的结果。 | -| [XLNet](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK/XLNet)(XLNet: Generalized Autoregressive Pretraining for Language Understanding) | 重要的语义表示模型之一,引入 Transformer-XL 为骨架,以 Permutation Language Modeling 为优化目标,在若干下游任务上优于 BERT 的性能。 | -| [ELMo](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleLARK/ELMo)(Embeddings from Language Models) | 重要的通用语义表示模型之一,以双向 LSTM 为网路基本组件,以 Language Model 为训练目标,通过预训练得到通用的语义表示,将通用的语义表示作为 Feature 迁移到下游 NLP 任务中,会显著提升下游任务的模型性能。 | +| 
[BERT](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/pretrain_language_models/BERT)(Bidirectional Encoder Representation from Transformers) | 一个迁移能力很强的通用语义表示模型, 以 Transformer 为网络基本组件,以双向 Masked Language Model和 Next Sentence Prediction 为训练目标,通过预训练得到通用语义表示,再结合简单的输出层,应用到下游的 NLP 任务,在多个任务上取得了 SOTA 的结果。 | +| [XLNet](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/pretrain_language_models/XLNet)(XLNet: Generalized Autoregressive Pretraining for Language Understanding) | 重要的语义表示模型之一,引入 Transformer-XL 为骨架,以 Permutation Language Modeling 为优化目标,在若干下游任务上优于 BERT 的性能。 | +| [ELMo](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/pretrain_language_models/ELMo)(Embeddings from Language Models) | 重要的通用语义表示模型之一,以双向 LSTM 为网络基本组件,以 Language Model 为训练目标,通过预训练得到通用的语义表示,将通用的语义表示作为 Feature 迁移到下游 NLP 任务中,会显著提升下游任务的模型性能。 | #### 文本相似度计算 -[SimNet](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/similarity_net) (Similarity Net) 是一个计算短文本相似度的框架,主要包括 BOW、CNN、RNN、MMDNN 等核心网络结构形式。SimNet 框架在百度各产品上广泛应用,提供语义相似度计算训练和预测框架,适用于信息检索、新闻推荐、智能客服等多个应用场景,帮助企业解决语义匹配问题。 +[SimNet](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/similarity_net) (Similarity Net) 是一个计算短文本相似度的框架,主要包括 BOW、CNN、RNN、MMDNN 等核心网络结构形式。SimNet 框架在百度各产品上广泛应用,提供语义相似度计算训练和预测框架,适用于信息检索、新闻推荐、智能客服等多个应用场景,帮助企业解决语义匹配问题。 #### 文本生成 -[PaddleTextGEN](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleTextGEN) (Paddle Text Generation) ,一个基于 PaddlePaddle 的文本生成框架,提供了一些列经典文本生成模型案例,如 vanilla seq2seq,seq2seq with attention,variational seq2seq 模型等。 +[seq2seq](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/seq2seq) (Paddle Text Generation) ,一个基于 PaddlePaddle 的文本生成框架,提供了一系列经典文本生成模型案例,如 vanilla seq2seq,seq2seq with attention,variational seq2seq 模型等。 ### NLP 系统应用 @@ -190,35 +190,35 @@ PaddlePaddle 提供了丰富的计算单元,使得用户可以采用模块化 | 模型 | 简介 | | ------------------------------------------------------------ | 
------------------------------------------------------------ | -| [Senta](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/sentiment_classification) (Sentiment Classification,简称Senta) | 面向**通用场景**的情感分类模型,针对带有主观描述的中文文本,可自动判断该文本的情感极性类别。 | -| [EmotionDetection](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/emotion_detection) (Emotion Detection,简称EmoTect) | 专注于识别**人机对话场景**中用户的情绪,针对智能对话场景中的用户文本,自动判断该文本的情绪类别。 | +| [Senta](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/sentiment_classification) (Sentiment Classification,简称Senta) | 面向**通用场景**的情感分类模型,针对带有主观描述的中文文本,可自动判断该文本的情感极性类别。 | +| [EmotionDetection](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/emotion_detection) (Emotion Detection,简称EmoTect) | 专注于识别**人机对话场景**中用户的情绪,针对智能对话场景中的用户文本,自动判断该文本的情绪类别。 | #### 阅读理解 -[PaddleMRC](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMRC) (Paddle Machine Reading Comprehension),集合了百度在阅读理解领域相关的模型,工具,开源数据集等一系列工作。 +[machine_reading_comprehension](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/machine_reading_comprehension) (Paddle Machine Reading Comprehension),集合了百度在阅读理解领域相关的模型,工具,开源数据集等一系列工作。 | 模型 | 简介 | | ------------------------------------------------------------ | ------------------------------------------------------------ | -| [DuReader](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2018-DuReader) | 包含百度开源的基于真实搜索用户行为的中文大规模阅读理解数据集以及基线模型。 | -| [KT-Net](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2019-KTNET) | 结合知识的阅读理解模型,Squad 曾排名第一。 | -| [D-Net](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/MRQA2019-D-NET) | 阅读理解十项全能模型,在 EMNLP2019 国际阅读理解大赛夺得 10 项冠军。 | +| [DuReader](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/Research/ACL2018-DuReader) | 包含百度开源的基于真实搜索用户行为的中文大规模阅读理解数据集以及基线模型。 | +| [KT-Net](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/Research/ACL2019-KTNET) 
| 结合知识的阅读理解模型,SQuAD 曾排名第一。 | +| [D-Net](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/Research/MRQA2019-D-NET) | 阅读理解十项全能模型,在 EMNLP2019 国际阅读理解大赛夺得 10 项冠军。 | #### 机器翻译 -[PaddleMT](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleMT) ,全称为Paddle Machine Translation,基于Transformer的经典机器翻译模型,基于论文 [Attention Is All You Need](https://arxiv.org/abs/1706.03762)。 +[machine_translation](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/machine_translation) ,全称为Paddle Machine Translation,基于Transformer的经典机器翻译模型,基于论文 [Attention Is All You Need](https://arxiv.org/abs/1706.03762)。 #### 对话系统 -[PaddleDialogue](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/PaddleDialogue) 包含对话系统方向的模型、数据集和工具。 +[dialogue_system](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/dialogue_system) 包含对话系统方向的模型、数据集和工具。 | 模型 | 简介 | | ------------------------------------------------------------ | ------------------------------------------------------------ |
Response Selection for Chatbots with Deep Attention Matching Network](https://aclweb.org/anthology/P18-1103/) 发表于 ACL2018。 | +| [DGU](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/dialogue_system/dialogue_general_understanding) (Dialogue General Understanding,通用对话理解模型) | 覆盖了包括**检索式聊天系统**中 context-response matching 任务和**任务完成型对话系统**中**意图识别**,**槽位解析**,**状态追踪**等常见对话系统任务,在 6 项国际公开数据集中都获得了最佳效果。 | +| [ADEM](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/dialogue_system/auto_dialogue_evaluation) (Auto Dialogue Evaluation Model) | 评估开放领域对话系统的回复质量,能够帮助企业或个人快速评估对话系统的回复质量,减少人工评估成本。 | +| [Proactive Conversation](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/Research/ACL2019-DuConv) | 包含百度开源的知识驱动的开放领域对话数据集 [DuConv](https://ai.baidu.com/broad/subordinate?dataset=duconv),以及基线模型。对应论文 [Proactive Human-Machine Conversation with Explicit Conversation Goals](https://arxiv.org/abs/1906.05572) 发表于 ACL2019。 | +| [DAM](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/Research/ACL2018-DAM)(Deep Attention Matching Network,深度注意力机制模型) | 开放领域多轮对话匹配模型,对应论文 [Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network](https://aclweb.org/anthology/P18-1103/) 发表于 ACL2018。 | -百度最新前沿工作开源,请参考 [Research](https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research)。 +百度最新前沿工作开源,请参考 [Research](https://github.com/PaddlePaddle/models/tree/release/1.7/PaddleNLP/Research)。 ## PaddleRec
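The reformatted similarity_net inference loop at the top of this diff follows a common PyReader pattern: run batches until the reader raises EOF, mapping pairwise cosine-style scores from [-1, 1] onto [0, 1] and taking the argmax per row for the classification case. A minimal standalone sketch of that control flow; the helper names are hypothetical, and the local `EOFException` stands in for `fluid.core.EOFException` so the example runs without PaddlePaddle installed:

```python
import numpy as np


class EOFException(Exception):
    """Stand-in for fluid.core.EOFException (PaddlePaddle not assumed installed)."""


def format_predictions(batch, task_mode):
    # Mirrors the diff's post-processing:
    #   pairwise  -> map a score in [-1, 1] onto [0, 1] via (s + 1) / 2
    #   otherwise -> emit the argmax class index as a string
    if task_mode == "pairwise":
        return [str((item[0] + 1) / 2) for item in batch]
    return [str(int(np.argmax(item))) for item in batch]


def drain_predictions(run_batch, task_mode):
    """Collect predictions batch by batch until the reader signals EOF."""
    preds_list = []
    while True:
        try:
            output = run_batch()  # analogous to test_exe.run(test_prog, fetch_list)
            preds_list += format_predictions(output[0], task_mode)
        except EOFException:
            break  # the real loop also calls infer_pyreader.reset() here
    return preds_list
```

In the actual script, `test_exe.run(program=test_prog, fetch_list=fetch_list)` plays the role of `run_batch`, and `infer_pyreader.reset()` on EOF is what lets the reader be reused for another pass.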