BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Commit a0fbc1e1

Authored May 01, 2017 by Yu Yang; committed via GitHub on May 01, 2017.

    Merge branch 'release/0.10.0' into release_note

Parents: 16f8bc53, dd32909a

Showing 22 changed files with 361 additions and 151 deletions (+361 -151)
- RELEASE.md (+15 -0)
- demo/seqToseq/seqToseq_net.py (+18 -5)
- doc/getstarted/index_cn.rst (+3 -2)
- doc/getstarted/index_en.rst (+3 -2)
- doc/howto/deep_model/rnn/hierarchical_layer_cn.rst (+14 -14)
- doc/howto/deep_model/rnn/index_cn.rst (+0 -1)
- doc/howto/deep_model/rnn/index_en.rst (+0 -5)
- doc/howto/dev/contribute_to_paddle_cn.md (+169 -80)
- doc/howto/usage/k8s/k8s_basis_cn.md (+5 -5)
- doc/index_cn.rst (+0 -1)
- doc/index_en.rst (+0 -2)
- doc_theme/templates/layout.html (+4 -6)
- paddle/gserver/tests/sequence_layer_group.conf (+1 -2)
- paddle/gserver/tests/sequence_nest_layer_group.conf (+1 -2)
- paddle/scripts/travis/docs.sh (+1 -0)
- paddle/trainer/tests/CMakeLists.txt (+11 -8)
- python/CMakeLists.txt (+6 -3)
- python/paddle/trainer_config_helpers/attrs.py (+9 -6)
- python/paddle/trainer_config_helpers/layers.py (+74 -1)
- python/paddle/trainer_config_helpers/networks.py (+16 -6)
- python/paddle/trainer_config_helpers/tests/configs/protostr/projections.protostr (+1 -0)
- python/paddle/v2/layer.py (+10 -0)
RELEASE.md (+15 -0)

```diff
@@ -7,6 +7,11 @@
 * Support rectangle input for CNN.
 * Support stride pooling for seqlastin and seqfirstin.
 * Expose seq_concat_layer/seq_reshape_layer in `trainer_config_helpers`.
+* Add dataset package - CIFAR, MNIST, IMDB, WMT14, CONLL05, movielens, imikolov.
+* Add Priorbox layer for Single Shot Multibox Detection.
+* Add smooth L1 cost.
+* Add data reader creator and data reader decorator for v2 API.
 * Add the cpu implementation of cmrnorm-projection.

 ## Improvements
@@ -19,6 +24,13 @@
 * Reorganize the catalog of doc/ and refine several docs.
 * Add Travis-CI for checking dead links.
 * Add a example for explaining sparse_vector.
+* Add Relu in layer_math.py.
+* Simplify data processing flow for quick start.
+* Support CUDNN Deconv.
+* Add data feeder for v2 API.
+* Support predicting the samples from sys.stdin for sentiment demo.
+* Provide multi-proccess interface for image preprocessing.
+* Add benchmark document for v1 API.
 * Add packages for automatically downloading public datasets.
 * Rename Argument::sumCost to Argument::sum since Argument does not have to have any relationship with cost.
@@ -49,6 +61,9 @@
 * Fix LogActivation which is not defined.
 * Fix bug when run test_layerHelpers multiple times.
 * Fix protobuf size limit on seq2seq demo.
+* Fix bug for dataprovider converter in GPU mode.
+* Fix bug in GatedRecurrentLayer which only occurs in predicting or `job=test` mode.
+* Fix bug for BatchNorm when testing more than models in test mode.
 * Fix unit test of paramRelu.
 * Fix some warning about CpuSparseMatrix.
 * Fix MultiGradientMachine error if trainer_count > batch_size.
```
demo/seqToseq/seqToseq_net.py (+18 -5)

```diff
@@ -69,7 +69,8 @@ def gru_encoder_decoder(data_conf,
                         encoder_size=512,
                         decoder_size=512,
                         beam_size=3,
-                        max_length=250):
+                        max_length=250,
+                        error_clipping=50):
     """
     A wrapper for an attention version of GRU Encoder-Decoder network
     is_generating: whether this config is used for generating
@@ -90,9 +91,19 @@ def gru_encoder_decoder(data_conf,
         input=src_word_id,
         size=word_vector_dim,
         param_attr=ParamAttr(name='_source_language_embedding'))
-    src_forward = simple_gru(input=src_embedding, size=encoder_size)
-    src_backward = simple_gru(
-        input=src_embedding, size=encoder_size, reverse=True)
+    src_forward = simple_gru(
+        input=src_embedding,
+        size=encoder_size,
+        naive=True,
+        gru_layer_attr=ExtraLayerAttribute(
+            error_clipping_threshold=error_clipping))
+    src_backward = simple_gru(
+        input=src_embedding,
+        size=encoder_size,
+        reverse=True,
+        naive=True,
+        gru_layer_attr=ExtraLayerAttribute(
+            error_clipping_threshold=error_clipping))
     encoded_vector = concat_layer(input=[src_forward, src_backward])
     with mixed_layer(size=decoder_size) as encoded_proj:
@@ -117,11 +128,13 @@ def gru_encoder_decoder(data_conf,
         decoder_inputs += full_matrix_projection(input=context)
         decoder_inputs += full_matrix_projection(input=current_word)

-    gru_step = gru_step_layer(
+    gru_step = gru_step_naive_layer(
         name='gru_decoder',
         input=decoder_inputs,
         output_mem=decoder_mem,
-        size=decoder_size)
+        size=decoder_size,
+        layer_attr=ExtraLayerAttribute(
+            error_clipping_threshold=error_clipping))

     with mixed_layer(
             size=target_dict_dim, bias_attr=True,
```
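The hunks above thread an `error_clipping` threshold into each GRU via `ExtraLayerAttribute(error_clipping_threshold=...)`. As a rough sketch of what such a threshold does (assuming it clips back-propagated error values element-wise into `[-t, t]`; the actual Paddle kernel is not shown in this diff):

```python
import numpy as np

def clip_errors(grad, threshold=50.0):
    """Element-wise clip of back-propagated error values into [-threshold, threshold]."""
    return np.clip(grad, -threshold, threshold)

grad = np.array([-120.0, -3.5, 0.0, 7.2, 500.0])
print(clip_errors(grad))  # values outside [-50, 50] are saturated at the bound
```

Clipping the error signal this way bounds the magnitude of gradients flowing back through the recurrent steps, which is why it is attached to the GRU layers here.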
doc/getstarted/index_cn.rst (+3 -2)

```diff
@@ -2,7 +2,8 @@
 ============

 .. toctree::
-  :maxdepth: 2
+  :maxdepth: 1

   build_and_install/index_cn.rst
-  basic_usage/index_cn.rst
+
+- `深度学习入门课程 <http://book.paddlepaddle.org/>`_
```
doc/getstarted/index_en.rst (+3 -2)

```diff
@@ -2,7 +2,8 @@ GET STARTED
 ============

 .. toctree::
-  :maxdepth: 2
+  :maxdepth: 1

   build_and_install/index_en.rst
-  basic_usage/index_en.rst
+
+- `Deep Learning 101 <http://book.paddlepaddle.org/index.en.html>`_
```
doc/howto/deep_model/rnn/hierarchical_layer_cn.rst (+14 -14)

```diff
@@ -19,18 +19,18 @@
 在 PaddlePaddle中,下面这些Layer能够接受双层序列作为输入,完成相应的计算。

-pooling_layer
-========
+pooling
+======

-pooling_layer 的使用示例如下,详细见 :ref:`api_trainer_config_helpers_layers_pooling_layer` 配置API。
+pooling 的使用示例如下,详细见 :ref:`api_v2.layer_pooling` 配置API。

 .. code-block:: bash

-        seq_pool = pooling_layer(input=layer,
-                                 pooling_type=AvgPooling(),
-                                 agg_level=AggregateLevel.EACH_SEQUENCE)
+        seq_pool = pooling(input=layer,
+                           pooling_type=pooling.Max(),
+                           agg_level=AggregateLevel.EACH_SEQUENCE)

-- `pooling_type` 目前支持两种,分别是:MaxPooling()和AvgPooling()。
+- `pooling_type` 目前支持两种,分别是:pooling.Max()和pooling.Avg()。

 - `agg_level=AggregateLevel.EACH_TIMESTEP` 时(默认值):
@@ -47,7 +47,7 @@
 last_seq 和 first_seq
 =====================

-last_seq 的使用示例如下( :ref:`api_trainer_config_helpers_layers_first_seq` 类似),详细见 :ref:`api_trainer_config_helpers_layers_last_seq` 配置API。
+last_seq 的使用示例如下( :ref:`api_v2.layer_first_seq` 类似),详细见 :ref:`api_v2.layer_last_seq` 配置API。

 .. code-block:: bash
@@ -65,16 +65,16 @@
 - 输入:必须是一个双层序列
 - 输出:一个单层序列,其中每个元素是双层序列中每个subseq最后一个(或第一个)元素。

-expand_layer
-============
+expand
+======

-expand_layer 的使用示例如下,详细见 :ref:`api_trainer_config_helpers_layers_expand_layer` 配置API。
+expand 的使用示例如下,详细见 :ref:`api_v2.layer_expand` 配置API。

 .. code-block:: bash

-        expand = expand_layer(input=layer1,
-                              expand_as=layer2,
-                              expand_level=ExpandLevel.FROM_TIMESTEP)
+        ex = expand(input=layer1,
+                    expand_as=layer2,
+                    expand_level=ExpandLevel.FROM_TIMESTEP)

 - `expand_level=ExpandLevel.FROM_TIMESTEP` 时(默认值):
```
doc/howto/deep_model/rnn/index_cn.rst (+0 -1)

```diff
@@ -4,7 +4,6 @@ RNN相关模型
 .. toctree::
   :maxdepth: 1

-  rnn_config_cn.rst
   recurrent_group_cn.md
   hierarchical_layer_cn.rst
   hrnn_rnn_api_compare_cn.rst
```
doc/howto/deep_model/rnn/index_en.rst (+0 -5)

```diff
 RNN Models
 ==========
-
-.. toctree::
-  :maxdepth: 1
-
-  rnn_config_en.rst
```
doc/howto/dev/contribute_to_paddle_cn.md (+169 -80)

The contribution guide was largely rewritten in this commit (the old fork → clone → develop-branch → commit → push → pull-request walkthrough was replaced). The new version of the file:

````markdown
# 如何贡献代码

我们真诚地感谢您的贡献,欢迎通过 GitHub 的 fork 和 pull request 流程来提交代码。

## 代码要求
- 代码注释请遵守 [Doxygen](http://www.stack.nl/~dimitri/doxygen/) 的样式。
- 确保编译器选项 `WITH_STYLE_CHECK` 已打开,并且编译能通过代码样式检查。
- 所有代码必须具有单元测试。
- 通过所有单元测试。

以下教程将指导您提交代码。

## [Fork](https://help.github.com/articles/fork-a-repo/)

跳转到 [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) GitHub首页,然后单击 `Fork` 按钮,生成自己目录下的仓库,比如 <https://github.com/USERNAME/Paddle>。

## 克隆(Clone)

将远程仓库 clone 到本地:

```bash
➜ git clone https://github.com/USERNAME/Paddle
➜ cd Paddle
```

## 创建本地分支

Paddle 目前使用[Git流分支模型](http://nvie.com/posts/a-successful-git-branching-model/)进行开发,测试,发行和维护,具体请参考 [Paddle 分支规范](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/releasing_process.md#paddle-分支规范)。

所有的 feature 和 bug fix 的开发工作都应该在一个新的分支上完成,一般从 `develop` 分支上创建新分支。

使用 `git checkout -b` 创建并切换到新分支。

```bash
➜ git checkout -b my-cool-stuff
```

值得注意的是,在 checkout 之前,需要保持当前分支目录 clean,否则会把 untracked 的文件也带到新分支上,这可以通过 `git status` 查看。

## 使用 `pre-commit` 钩子

Paddle 开发人员使用 [pre-commit](http://pre-commit.com/) 工具来管理 Git 预提交钩子。它可以帮助我们格式化源代码(C++,Python),在提交(commit)前自动检查一些基本事宜(如每个文件只有一个 EOL,Git 中不要添加大文件等)。

`pre-commit` 测试是 Travis-CI 中单元测试的一部分,不满足钩子的 PR 不能被提交到 Paddle,首先安装并在当前目录运行它:

```bash
➜ pip install pre-commit
➜ pre-commit install
```

Paddle 使用 `clang-format` 来调整 C/C++ 源代码格式,请确保 `clang-format` 版本在 3.8 以上。

## 开始开发

在本例中,我删除了 README.md 中的一行,并创建了一个新文件。

通过 `git status` 查看当前状态,这会提示当前目录的一些变化,同时也可以通过 `git diff` 查看文件具体被修改的内容。

```bash
➜ git status
On branch test
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   README.md

Untracked files:
  (use "git add <file>..." to include in what will be committed)

	test

no changes added to commit (use "git add" and/or "git commit -a")
```

## 构建和测试

编译 PaddlePaddle 的源码以及生成文档需要多种开发工具。为了方便大家,我们的标准开发流程是把这些工具都装进一个Docker image,称为*开发镜像*,通常名字是 `paddle:dev`。然后所有用 `cmake && make` 的地方(比如IDE配置里)都用 `docker run paddle:dev` 来代替。

如要build这个开发镜像,在源码目录树的根目录中运行:

```bash
➜ docker build -t paddle:dev .
```

随后可以用这个开发镜像开build PaddlePaddle的源码。比如如果要build一个不依赖GPU,但是支持AVX指令集,并且包括unit tests的PaddlePaddle,可以:

```bash
➜ docker run -v $(pwd):/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TEST=ON" paddle:dev
```

这个过程除了编译PaddlePaddle为 `./build/libpaddle.so`,并且输出一个 `./build/paddle.deb` 文件之外,还会输出一个 `build/Dockerfile`。我们只需要运行下面命令把编译好的PaddlePaddle打包成一个*生产镜像*(`paddle:prod`):

```bash
➜ docker build -t paddle:prod -f build/Dockerfile .
```

如果要运行所有的单元测试,可以用如下命令:

```bash
➜ docker run -it -v $(pwd):/paddle paddle:dev bash -c "cd /paddle/build && ctest"
```

关于构建和测试的更多信息,请参见[这篇文档](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/getstarted/build_and_install/docker_install_cn.rst)。

## 提交(commit)

接下来我们取消对 README.md 文件的改变,然后提交新添加的 test 文件。

```bash
➜ git checkout -- README.md
➜ git status
On branch test
Untracked files:
  (use "git add <file>..." to include in what will be committed)

	test

nothing added to commit but untracked files present (use "git add" to track)
➜ git add test
```

Git 每次提交代码,都需要写提交说明,这可以让其他人知道这次提交做了哪些改变,这可以通过 `git commit` 完成。

```bash
➜ git commit
CRLF end-lines remover...............................(no files to check)Skipped
yapf.................................................(no files to check)Skipped
Check for added large files..............................................Passed
Check for merge conflicts................................................Passed
Check for broken symlinks................................................Passed
Detect Private Key...................................(no files to check)Skipped
Fix End of Files.....................................(no files to check)Skipped
clang-formater.......................................(no files to check)Skipped
[my-cool-stuff c703c041] add test file
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 233
```

提交信息的第一行是标题,其他行可以添加一些细节(如果有必要的话)。

## 保持本地仓库最新

在准备发起 Pull Request 之前,需要同步原仓库(<https://github.com/PaddlePaddle/Paddle>)最新的代码。

首先通过 `git remote` 查看当前远程仓库的名字。

```bash
➜ git remote
origin
➜ git remote -v
origin	https://github.com/USERNAME/Paddle (fetch)
origin	https://github.com/USERNAME/Paddle (push)
```

这里 origin 是我们 clone 的远程仓库的名字,也就是自己用户名下的 Paddle,接下来我们创建一个原始 Paddle 仓库的远程主机,命名为 upstream。

```bash
➜ git remote add upstream https://github.com/PaddlePaddle/Paddle
➜ git remote
origin
upstream
```

获取 upstream 的最新代码并更新当前分支。

```bash
➜ git fetch upstream
➜ git pull upstream develop
```

## Push 到远程仓库

将本地的修改推送到 GitHub 上,也就是 https://github.com/USERNAME/Paddle。

```bash
# 推送到远程仓库 origin 的 my-cool-stuff 分支上
➜ git push origin my-cool-stuff
```

## 建立 Issue 并完成 Pull Request

建立一个 Issue 描述问题,并记录它的编号。

切换到所建分支,然后点击 `New pull request`。

<img width="295" alt="screen shot 2017-04-26 at 9 09 28 pm" src="https://cloud.githubusercontent.com/assets/11692045/25436054/a6d98c66-2ac4-11e7-9cb1-18dd13150230.png">

选择目标分支:

<img width="750" alt="screen shot 2017-04-26 at 9 11 52 pm" src="https://cloud.githubusercontent.com/assets/11692045/25436139/f83b1e6c-2ac4-11e7-8c0e-add499023c46.png">

在 PR 的描述说明中,填写 `resolve #Issue编号` 可以在这个 PR 被 merge 后,自动关闭对应的 Issue,具体请见 <https://help.github.com/articles/closing-issues-via-commit-messages/>。

接下来等待 review,如果有需要修改的地方,参照上述步骤更新 origin 中的对应分支即可。

## 删除远程分支

在 PR 被 merge 进主仓库后,我们可以在 PR 的页面删除远程仓库的分支。

<img width="775" alt="screen shot 2017-04-26 at 9 18 24 pm" src="https://cloud.githubusercontent.com/assets/11692045/25436457/e4cdd472-2ac5-11e7-9272-badc76c4a23e.png">

也可以使用 `git push origin :分支名` 删除远程分支,如:

```bash
➜ git push origin :my-cool-stuff
```

## 删除本地分支

最后,删除本地分支。

```bash
# 切换到 develop 分支
➜ git checkout develop

# 删除 my-cool-stuff 分支
➜ git branch -D my-cool-stuff
```

至此,我们就完成了一次代码贡献的过程。
````
doc/howto/usage/k8s/k8s_basis_cn.md (+5 -5)

````diff
@@ -14,7 +14,7 @@
 - [*PersistentVolume*](https://kubernetes.io/docs/user-guide/persistent-volumes/): 和 [*PersistentVolumeClaim*](https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) 结合,将外部的存储服务在Kubernetes中描述成为统一的资源形式,便于存储资源管理和Pod引用。

-# 部署Kubernetes集群
+## 部署Kubernetes集群

 Kubernetes提供了多种集群部署的方案,本文档内不重复介绍。这里给出集中常见的部署方法:
@@ -25,7 +25,7 @@
 可以参考[这个表格](https://kubernetes.io/docs/getting-started-guides/#table-of-solutions)选择适合您的场景的合适方案。

-# 选择存储方案
+## 选择存储方案

 容器不会保留在运行时生成的数据,job或者应用程序在容器中运行时生成的数据会在容器销毁时消失。为了完成分布式机器学习训练任务,需要有一个外部的存储服务来保存训练所需数据和训练输出。

 常见的可选存储服务包括:
@@ -35,9 +35,9 @@
 - [*Ceph*](http://docs.ceph.com/docs/master/): 分布式文件系统,支持rbd,POSIX API接口(ceph fs)和对象存储API,参考[这里](https://kubernetes.io/docs/user-guide/volumes/#rbd)。
 - [*MooseFS*](https://moosefs.com/documentation.html): 一个分布式的存储系统。需要先挂载到服务器Node上再通过kubernetes hostPath Volume挂载到容器中。

-# 配置kubectl
+## 配置kubectl

-## 安装kubectl
+### 安装kubectl

 ```
 # OS X
 curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
@@ -49,7 +49,7 @@
 curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/windows/amd64/kubectl.exe
 ```

-## 配置kubectl访问你的kubernetes集群
+### 配置kubectl访问你的kubernetes集群

 编辑 `~/.kube/config` 这个配置文件,修改 `Master-IP` 的地址。如果使用SSL认证,则需要配置 `certificate-authority` 和 `users` 中的用户证书。如果是使用非SSL方式访问(比如通过8080端口),也可以去掉这些证书的配置。
````
doc/index_cn.rst (+0 -1)

```diff
@@ -5,7 +5,6 @@ PaddlePaddle 文档
   :maxdepth: 1

   getstarted/index_cn.rst
-  tutorials/index_cn.md
   howto/index_cn.rst
   api/index_cn.rst
   faq/index_cn.rst
```
doc/index_en.rst (+0 -2)

```diff
@@ -5,8 +5,6 @@ PaddlePaddle Documentation
   :maxdepth: 1

   getstarted/index_en.rst
-  tutorials/index_en.md
   howto/index_en.rst
   api/index_en.rst
   about/index_en.rst
\ No newline at end of file
```
doc_theme/templates/layout.html (+4 -6)

```diff
@@ -114,10 +114,7 @@
       </ul>
     </div>
     <ul class="site-page-links">
-      <li><a>Home</a></li>
-      <li><a>Get Started</a></li>
-      <li class="active"><a>Documentation</a></li>
-      <li><a>About Us</a></li>
+      <li><a href="/">Home</a></li>
     </ul>
   </div>
   <div class="doc-module">
@@ -137,7 +134,7 @@
     {{ toctree }}
     {% endblock %}
   </nav>
-  {% if toc %}
+  {% if False %}
   <nav class="local-toc">{{ toc }}</nav>
   {% endif %}
   <section class="doc-content-wrap">
@@ -168,7 +165,8 @@
         VERSION: '{{ release|e }}',
         COLLAPSE_INDEX: false,
         FILE_SUFFIX: '{{ '' if no_search_suffix else file_suffix }}',
-        HAS_SOURCE: {{ has_source|lower }}
+        HAS_SOURCE: {{ has_source|lower }},
+        SOURCELINK_SUFFIX: ".txt",
     };
     </script>
     {%- for scriptfile in script_files %}
```
paddle/gserver/tests/sequence_layer_group.conf (+1 -2)

```diff
@@ -48,8 +48,7 @@ lstm = lstmemory_group(
     size=hidden_dim,
     act=TanhActivation(),
     gate_act=SigmoidActivation(),
-    state_act=TanhActivation(),
-    lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50))
+    state_act=TanhActivation())

 lstm_last = last_seq(input=lstm)
```
paddle/gserver/tests/sequence_nest_layer_group.conf (+1 -2)

```diff
@@ -51,8 +51,7 @@ def lstm_group(lstm_group_input):
         size=hidden_dim,
         act=TanhActivation(),
         gate_act=SigmoidActivation(),
-        state_act=TanhActivation(),
-        lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50))
+        state_act=TanhActivation())

     return lstm_output
```
paddle/scripts/travis/docs.sh (+1 -0)

```diff
@@ -60,6 +60,7 @@ function deploy_docs() {
 deploy_docs "master" "."
 deploy_docs "develop" "./develop/"
+deploy_docs "release/0.10.0" "./release/0.10.0/"

 # Check is there anything changed.
 set +e
```
paddle/trainer/tests/CMakeLists.txt (+11 -8)

```diff
@@ -17,14 +17,17 @@ add_test(NAME test_Trainer
     WORKING_DIRECTORY ${PROJ_ROOT}/paddle/)

 ############### test_TrainerOnePass ##########################
-add_unittest_without_exec(test_TrainerOnePass
-    test_TrainerOnePass.cpp)
-add_test(NAME test_TrainerOnePass
-    COMMAND ${PROJ_ROOT}/paddle/.set_python_path.sh -d
-        ${PROJ_ROOT}/python/:${PROJ_ROOT}/paddle/trainer/tests
-        ${PROJ_ROOT}/paddle/.set_port.sh -p port ${CMAKE_CURRENT_BINARY_DIR}/test_TrainerOnePass
-    WORKING_DIRECTORY ${PROJ_ROOT}/paddle/)
+if(WITH_PYTHON)
+    # only run test_TrainerOnePass when PYTHON is enabled, because train one pass
+    # is using PyDataProvider2.
+    add_unittest_without_exec(test_TrainerOnePass
+        test_TrainerOnePass.cpp)
+    add_test(NAME test_TrainerOnePass
+        COMMAND ${PROJ_ROOT}/paddle/.set_python_path.sh -d
+            ${PROJ_ROOT}/python/:${PROJ_ROOT}/paddle/trainer/tests
+            ${PROJ_ROOT}/paddle/.set_port.sh -p port ${CMAKE_CURRENT_BINARY_DIR}/test_TrainerOnePass
+        WORKING_DIRECTORY ${PROJ_ROOT}/paddle/)
+endif()

 ################ test_CompareTwoNets ######################
 add_unittest_without_exec(test_CompareTwoNets
     test_CompareTwoNets.cpp)
```
python/CMakeLists.txt (+6 -3)

```diff
@@ -24,9 +24,12 @@ add_custom_target(paddle_python ALL DEPENDS
     ${OUTPUT_DIR}/.timestamp)

 add_subdirectory(paddle/trainer_config_helpers/tests)
-add_subdirectory(paddle/v2/tests)
-add_subdirectory(paddle/v2/reader/tests)
-add_subdirectory(paddle/v2/plot/tests)
+if (WITH_SWIG_PY)
+    # enable v2 API unittest only when paddle swig api is compiled
+    add_subdirectory(paddle/v2/tests)
+    add_subdirectory(paddle/v2/reader/tests)
+    add_subdirectory(paddle/v2/plot/tests)
+endif()

 install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/dist/
     DESTINATION opt/paddle/share/wheels
```
python/paddle/trainer_config_helpers/attrs.py (+9 -6)

```diff
@@ -208,12 +208,15 @@ class ExtraLayerAttribute(object):
                  drop_rate=None,
                  device=None):
         self.attr = dict()
-        if isinstance(error_clipping_threshold, float):
-            assert error_clipping_threshold > 0
-            self.attr["error_clipping_threshold"] = error_clipping_threshold
-
-        if isinstance(drop_rate, float):
-            assert drop_rate > 0
+        if error_clipping_threshold is not None:
+            error_clipping_threshold = float(error_clipping_threshold)
+            if error_clipping_threshold < 0:
+                raise ValueError("Error clipping must > 0")
+            self.attr['error_clipping_threshold'] = error_clipping_threshold
+
+        if drop_rate is not None:
+            drop_rate = float(drop_rate)
+            if drop_rate < 0:
+                raise ValueError("Dropout rate must > 0")
             self.attr["drop_rate"] = drop_rate

         if isinstance(device, int):
```
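The rewritten check above accepts anything coercible to `float` (so an `int` like `50` now works) and raises `ValueError` instead of tripping a bare `assert`. The same pattern in isolation (a sketch; `normalize_threshold` is a hypothetical helper, not Paddle API):

```python
def normalize_threshold(value, name):
    """Coerce a config value to float and reject negatives, mirroring the new attrs.py logic."""
    if value is None:
        return None
    value = float(value)  # int or numeric-string inputs now pass, matching the diff
    if value < 0:
        raise ValueError("%s must > 0" % name)
    return value

print(normalize_threshold(50, "error_clipping_threshold"))  # 50.0
```

The practical effect of the change: `ExtraLayerAttribute(error_clipping_threshold=50)` no longer silently drops the threshold because `50` is not a `float` instance.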
python/paddle/trainer_config_helpers/layers.py
浏览文件 @
a0fbc1e1
...
@@ -84,6 +84,7 @@ __all__ = [
...
@@ -84,6 +84,7 @@ __all__ = [
'GeneratedInput'
,
'GeneratedInput'
,
'SubsequenceInput'
,
'SubsequenceInput'
,
'gru_step_layer'
,
'gru_step_layer'
,
'gru_step_naive_layer'
,
'recurrent_layer'
,
'recurrent_layer'
,
'BaseGeneratedInput'
,
'BaseGeneratedInput'
,
'conv_operator'
,
'conv_operator'
,
...
@@ -2284,7 +2285,7 @@ def img_pool_layer(input,
...
@@ -2284,7 +2285,7 @@ def img_pool_layer(input,
type_name
=
pool_type
.
name
+
'-projection'
\
type_name
=
pool_type
.
name
+
'-projection'
\
if
(
if
(
isinstance
(
pool_type
,
AvgPooling
)
or
isinstance
(
pool_type
,
MaxPooling
))
\
isinstance
(
pool_type
,
AvgPooling
)
or
isinstance
(
pool_type
,
MaxPooling
))
\
else
pool_type
.
name
else
pool_type
.
name
pool_size_y
=
pool_size
if
pool_size_y
is
None
else
pool_size_y
pool_size_y
=
pool_size
if
pool_size_y
is
None
else
pool_size_y
...
@@ -3084,6 +3085,78 @@ def gru_step_layer(input,
         activation=act)
 
 
+@wrap_bias_attr_default()
+@wrap_param_attr_default()
+@wrap_act_default(param_names=['gate_act'], act=SigmoidActivation())
+@wrap_act_default(act=TanhActivation())
+@wrap_name_default('gru_step_naive')
+@layer_support(ERROR_CLIPPING, DROPOUT)
+def gru_step_naive_layer(input,
+                         output_mem,
+                         size=None,
+                         name=None,
+                         act=None,
+                         gate_act=None,
+                         bias_attr=None,
+                         param_attr=None,
+                         layer_attr=None):
+    """
+    GRU Step Layer, but built from MixedLayer primitives. It supports
+    ERROR_CLIPPING and DROPOUT.
+
+    :param input: input layer; its size must be 3 x ``size``.
+    :param output_mem: memory of this layer's output.
+    :param size: hidden size of the GRU step; defaults to ``input.size / 3``.
+    :param name: layer name.
+    :param act: activation of the output candidate.
+    :param gate_act: activation of the update and reset gates.
+    :param bias_attr: bias attribute.
+    :param param_attr: parameter attribute.
+    :param layer_attr: extra layer attribute.
+    :return: LayerOutput object.
+    """
+    if input.size % 3 != 0:
+        raise ValueError("GruStep input size must be divided by 3")
+    if size is None:
+        size = input.size / 3
+
+    def __gate__(gate_name, offset):
+        with mixed_layer(
+                name=name + "_" + gate_name,
+                size=size,
+                layer_attr=layer_attr,
+                bias_attr=bias_attr,
+                act=gate_act) as gate:
+            gate += identity_projection(input=input, offset=offset)
+            gate += full_matrix_projection(
+                input=output_mem, param_attr=param_attr)
+        return gate
+
+    update_gate = __gate__("update", 0)
+    reset_gate = __gate__("reset", size)
+
+    with mixed_layer(
+            name=name + "_reset_output", bias_attr=False) as reset_output:
+        reset_output += dotmul_operator(a=output_mem, b=reset_gate)
+
+    with mixed_layer(
+            name=name + "_output_candidate",
+            size=size,
+            layer_attr=layer_attr,
+            bias_attr=bias_attr,
+            act=act) as output_candidate:
+        output_candidate += identity_projection(input=input, offset=2 * size)
+        output_candidate += full_matrix_projection(
+            input=reset_output, param_attr=param_attr)
+
+    with mixed_layer(name=name) as output:
+        output += identity_projection(output_mem)
+        output += dotmul_operator(a=output_mem, b=update_gate, scale=-1.0)
+        output += dotmul_operator(a=output_candidate, b=update_gate)
+
+    return output
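The stack of mixed layers above composes the standard GRU arithmetic. As an illustration (not PaddlePaddle code), here is a minimal NumPy sketch of the same step, with biases omitted and the default activations hard-coded (sigmoid gates, tanh candidate); `x` plays the role of `input`, already holding the three projected slices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step_naive(x, h_prev, w_u, w_r, w_c):
    """One GRU step; x concatenates the three pre-projected input
    slices [update | reset | candidate], each of width size."""
    size = h_prev.shape[-1]
    u = sigmoid(x[0:size] + w_u @ h_prev)           # update gate
    r = sigmoid(x[size:2 * size] + w_r @ h_prev)    # reset gate
    c = np.tanh(x[2 * size:] + w_c @ (r * h_prev))  # output candidate
    # output = h_prev - u*h_prev + u*c, mirroring the identity and the
    # two dotmul_operator terms in the final mixed_layer above
    return h_prev + u * (c - h_prev)
```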
 @wrap_name_default()
 @layer_support()
 def get_output_layer(input, arg_name, name=None, layer_attr=None):
...
python/paddle/trainer_config_helpers/networks.py
...
@@ -825,7 +825,8 @@ def gru_unit(input,
              gru_param_attr=None,
              act=None,
              gate_act=None,
-             gru_layer_attr=None):
+             gru_layer_attr=None,
+             naive=False):
     """
     Define calculations that a gated recurrent unit performs in a single time
     step. This function itself is not a recurrent layer, so that it can not be
...
@@ -857,7 +858,12 @@ def gru_unit(input,
     out_mem = memory(name=name, size=size)
 
-    gru_out = gru_step_layer(
+    if naive:
+        __step__ = gru_step_naive_layer
+    else:
+        __step__ = gru_step_layer
+
+    gru_out = __step__(
         name=name,
         input=input,
         output_mem=out_mem,
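The `naive` flag only changes which step builder the recurrence uses. A standalone sketch of that dispatch, with illustrative stand-in functions rather than the real layers:

```python
# Illustrative stand-ins for the two step implementations.
def gru_step_layer(input, output_mem):
    return "cpp-kernel step"

def gru_step_naive_layer(input, output_mem):
    return "mixed-layer step"

def gru_unit(input, output_mem, naive=False):
    # Same dispatch as the patch: the naive path is built from
    # MixedLayer pieces and supports error clipping and dropout.
    if naive:
        __step__ = gru_step_naive_layer
    else:
        __step__ = gru_step_layer
    return __step__(input, output_mem)
```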
...
@@ -879,7 +885,8 @@ def gru_group(input,
               gru_param_attr=None,
               act=None,
               gate_act=None,
-              gru_layer_attr=None):
+              gru_layer_attr=None,
+              naive=False):
     """
     gru_group is a recurrent layer group version of Gated Recurrent Unit. It
     does exactly the same calculation as the grumemory layer does. A promising
...
@@ -928,7 +935,8 @@ def gru_group(input,
             gru_param_attr=gru_param_attr,
             act=act,
             gate_act=gate_act,
-            gru_layer_attr=gru_layer_attr)
+            gru_layer_attr=gru_layer_attr,
+            naive=naive)
 
     return recurrent_group(
         name='%s_recurrent_group' % name,
...
@@ -949,7 +957,8 @@ def simple_gru(input,
                gru_param_attr=None,
                act=None,
                gate_act=None,
-               gru_layer_attr=None):
+               gru_layer_attr=None,
+               naive=False):
     """
     You maybe see gru_step_layer, grumemory in layers.py, gru_unit, gru_group,
     simple_gru in network.py. The reason why there are so many interfaces is
...
@@ -1018,7 +1027,8 @@ def simple_gru(input,
         gru_param_attr=gru_param_attr,
         act=act,
         gate_act=gate_act,
-        gru_layer_attr=gru_layer_attr)
+        gru_layer_attr=gru_layer_attr,
+        naive=naive)
 
 
 @wrap_name_default('simple_gru2')
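With the flag threaded through `gru_unit`, `gru_group`, and `simple_gru`, a model config can opt into the MixedLayer-based step, which supports error clipping and dropout. A hypothetical, untested config fragment (layer names and sizes are made up for illustration):

```python
from paddle.trainer_config_helpers import *

words = data_layer(name='word_ids', size=1024)  # hypothetical input layer
emb = embedding_layer(input=words, size=256)
# naive=True routes the recurrence through gru_step_naive_layer
rnn = simple_gru(input=emb, size=256, naive=True)
```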
...
python/paddle/trainer_config_helpers/tests/configs/protostr/projections.protostr
...
@@ -320,6 +320,7 @@ layers {
     }
   }
   drop_rate: 0.5
+  error_clipping_threshold: 40.0
 }
 parameters {
   name: "___embedding_0__.w0"
...
python/paddle/v2/layer.py
...
@@ -356,6 +356,9 @@ def mixed(size=0,
     return MixedLayerV2(size, input, name, act, bias_attr, layer_attr)
 
 
+mixed.__doc__ = conf_helps.mixed_layer.__doc__
+
+
 class RecurrentLayerInput(Layer):
     def __init__(self, recurrent_name, index, parent_layers):
         parents_len = len(parent_layers)
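The added assignments reuse the v1 helpers' docstrings on the v2 wrappers; `__doc__` is an ordinary writable attribute on function objects. A minimal standalone sketch of the pattern (names are illustrative):

```python
def mixed_layer():
    """Docstring of the original v1 helper."""

def mixed(*args, **kwargs):
    # Thin v2 wrapper around the v1 helper.
    return mixed_layer(*args, **kwargs)

# Forward the documentation so help(mixed) shows the v1 text, as
# layer.py does for mixed, memory, recurrent_group and beam_search.
mixed.__doc__ = mixed_layer.__doc__
```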
...
@@ -404,6 +407,8 @@ data.__name__ = 'data'
 AggregateLevel = conf_helps.layers.AggregateLevel
 ExpandLevel = conf_helps.layers.ExpandLevel
 memory = MemoryV2
+memory.__name__ = 'memory'
+memory.__doc__ = conf_helps.memory.__doc__
 
 
 def __layer_name_mapping__(inname):
...
@@ -512,6 +517,9 @@ def recurrent_group(step, input, name=None):
     return retv
 
 
+recurrent_group.__doc__ = conf_helps.recurrent_group.__doc__
+
+
 @wrap_name_default()
 def beam_search(step,
                 input,
@@ -579,6 +587,8 @@ def beam_search(step,
...
@@ -579,6 +587,8 @@ def beam_search(step,
return
tmp
return
tmp
beam_search
.
__doc__
=
conf_helps
.
beam_search
.
__doc__
__projection_names__
=
filter
(
lambda
x
:
x
.
endswith
(
'_projection'
),
__projection_names__
=
filter
(
lambda
x
:
x
.
endswith
(
'_projection'
),
dir
(
conf_helps
))
dir
(
conf_helps
))
...