From aafcb3265818817df0defb82b8772b6f2987f4e2 Mon Sep 17 00:00:00 2001
From: Cheerego <35982308+shanyi15@users.noreply.github.com>
Date: Mon, 22 Apr 2019 17:06:46 +0800
Subject: [PATCH] update_models_link (#825)

---
 doc/fluid/user_guides/models/index_cn.rst | 12 +++++-------
 doc/fluid/user_guides/models/index_en.rst |  6 +++---
 2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/doc/fluid/user_guides/models/index_cn.rst b/doc/fluid/user_guides/models/index_cn.rst
index 0d744ef4b..8e8695f83 100644
--- a/doc/fluid/user_guides/models/index_cn.rst
+++ b/doc/fluid/user_guides/models/index_cn.rst
@@ -14,8 +14,7 @@ a tool for Fluid model configuration and parameter files.
 - `AlexNet `__
 - `VGG `__
 - `GoogleNet `__
-- `Residual
-  Network `__
+- `Residual Network `__
 - `Inception-v4 `__
 - `MobileNet `__
 - `Dual Path
@@ -122,7 +121,7 @@ NMT free of the RNN structure emerged, for example NMT based on the convolutional neural network CNN
 
 Attention to learn the contextual dependencies in language. Compared with RNN/CNN, this structure has lower computational complexity within a single layer, is easier to parallelize, and models long-range dependencies more easily; it ultimately achieved the best translation quality across multiple languages.
 
-- `Transformer `__
+- `Transformer `__
 
 Reinforcement learning
 --------
@@ -163,7 +162,7 @@ DQN and its variants, and tested their performance in Atari games.
 
 The DAM (Deep Attention Matching Network) released in this example is work published by Baidu's Natural Language Processing Department at ACL 2018, used for response selection in multi-turn dialogue of retrieval-based chatbots. Inspired by the Transformer, DAM's network structure is based entirely on the attention mechanism: it uses stacked self-attention to learn semantic representations of responses and contexts at different granularities, then uses cross-attention to capture the relevance between responses and contexts. It outperforms other models on two large-scale multi-turn dialogue datasets.
 
-- `Deep Attention Matching Network `__
+- `Deep Attention Matching Network `__
 
 AnyQ
 ----
@@ -174,8 +173,7 @@
 
 SimNet is a semantic matching framework developed in-house by Baidu's Natural Language Processing Department in 2013 and widely used across Baidu products. Its core network structures include BOW, CNN, RNN, and MM-DNN, and mainstream academic semantic matching models such as MatchPyramid, MV-LSTM, and K-NRM are also integrated on top of the framework. Models built with SimNet can be conveniently added to the AnyQ system to strengthen its semantic matching capability.
 
-- `SimNet in PaddlePaddle
-  Fluid `__
+- `SimNet in PaddlePaddle Fluid `_
 
 Machine Reading Comprehension
 ----
@@ -184,7 +182,7 @@
 
 The Baidu reading comprehension dataset is a real-world dataset open-sourced by Baidu's Natural Language Processing Department. All questions and passages come from actual data (Baidu search engine data and the Baidu Zhidao Q&A community), and the answers are written by humans. Each question corresponds to multiple answers. The dataset contains 200k questions, 1000k passages, and 420k answers, making it currently the largest Chinese MRC dataset. Baidu also open-sourced the corresponding reading comprehension model, DuReader, which adopts a widely used layered network structure, captures the interaction between question and passage with a bi-directional attention mechanism to generate a query-aware passage representation, and finally predicts the answer span from that representation with a pointer network.
 
-- `DuReader in PaddlePaddle Fluid `__
+- `DuReader in PaddlePaddle Fluid `_
 
 
 Personalized recommendation
diff --git a/doc/fluid/user_guides/models/index_en.rst b/doc/fluid/user_guides/models/index_en.rst
index ce59eec78..5e5e88eef 100644
--- a/doc/fluid/user_guides/models/index_en.rst
+++ b/doc/fluid/user_guides/models/index_en.rst
@@ -97,7 +97,7 @@ Machine Translation transforms a natural language (source language) into another
 
 The Transformer implemented in this example is a machine translation model based on the self-attention mechanism: it contains no RNN or CNN structure and relies entirely on attention to learn contextual dependencies. Compared with RNN/CNN, this structure has lower computational complexity within a single layer, is easier to parallelize, and models long-range dependencies more easily; it ultimately achieves the best translation quality across multiple languages.
 
-- `Transformer `__
+- `Transformer `__
 
 Reinforcement learning
 -------------------------
@@ -131,7 +131,7 @@ In many scenarios of natural language processing, it is necessary to measure the
 
 The DAM (Deep Attention Matching Network) introduced in this example is work published by Baidu's Natural Language Processing Department at ACL 2018, used for response selection in multi-turn dialogue of retrieval-based chatbots. Inspired by the Transformer, DAM is based entirely on the attention mechanism. It uses stacked self-attention to learn semantic representations of responses and contexts at different granularities, then uses cross-attention to capture the relevance between responses and contexts. Its performance on two large-scale multi-turn dialogue datasets is better than that of other models.
 
-- `Deep Attention Matching Network `__
+- `Deep Attention Matching Network `__
 
 AnyQ
 ----
@@ -151,7 +151,7 @@ Machine Reading Comprehension (MRC) is one of the core tasks in Natural Language
 
 The Baidu reading comprehension dataset is a real-world dataset open-sourced by Baidu's Natural Language Processing Department. All questions and passages are drawn from actual data (Baidu search engine data and the Baidu Zhidao Q&A community), and the answers are written by humans. Each question corresponds to multiple answers. The dataset contains 200k questions, 1000k passages, and 420k answers, making it currently the largest Chinese MRC dataset. Baidu also open-sourced the corresponding reading comprehension model, DuReader, which adopts a widely used layered network structure, captures the interaction between question and passage with a bi-directional attention mechanism to generate a query-aware passage representation, and finally predicts the answer span from that representation with a pointer network.
 
-- `DuReader in PaddlePaddle Fluid `__
+- `DuReader in PaddlePaddle Fluid `__
 
 
 Personalized recommendation
--
GitLab