) the latest code.
+
+First, check the names of the current remote repositories with `git remote`.
+
+```bash
+➜ git remote
+origin
+➜ git remote -v
+origin https://github.com/USERNAME/Paddle (fetch)
+origin https://github.com/USERNAME/Paddle (push)
```
-Update your fork with the latest upstream changes:
+Here `origin` is the name of the remote we cloned, i.e. the Paddle repository under your own username. Next, add a remote for the original Paddle repository and name it `upstream`.
-```shell
-git pull --rebase upstream develop
+```bash
+➜ git remote add upstream https://github.com/PaddlePaddle/Paddle
+➜ git remote
+origin
+upstream
```
-If there are no local commits, git will simply fast-forward. However, if you have been making changes (in most cases you should not), you may have to deal with conflicts.
-Now your local master branch is consistent with, and up to date with, the upstream changes.
+Fetch the latest code from upstream and update the current branch.
-## Push to GitHub
+```bash
+➜ git fetch upstream
+➜ git pull upstream develop
+```
+
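+If you prefer a linear history, rebasing onto upstream (the approach an earlier revision of this guide used) should work just as well:
+
+```bash
+# replay local commits on top of the latest upstream develop
+➜ git pull --rebase upstream develop
+```
+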
+## Push to the remote repository
+
+Push your local changes to GitHub, i.e. https://github.com/USERNAME/Paddle.
-```shell
-# push your repo to GitHub
-git push -u origin MY_COOL_STUFF_BRANCH # create the remote branch MY_COOL_STUFF_BRANCH on origin.
+```bash
+# push to the my-cool-stuff branch of the remote repository origin
+➜ git push origin my-cool-stuff
```
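+
+If you pass `-u` (short for `--set-upstream`) on the first push, git records the remote branch as the tracking branch, so later updates can be pushed with a plain `git push`:
+
+```bash
+# first push: create the remote branch and set it as the tracking branch
+➜ git push -u origin my-cool-stuff
+```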
-## Pull Request
+## Open an Issue and create a Pull Request
+
+Open an Issue describing the problem, and note down its number.
+
+Switch to the branch you created, then click `New pull request`.
+
+
-Go to the page of your fork on GitHub, choose your development branch, and click the **pull request button**.
+Select the target branch:
-## Update your pull request with the latest version
+
-During code review, your pull request may become stale because of new commits in baidu/Paddle. If there are no conflicts, GitHub allows updating automatically: you can click the "Update Branch" button on the pull request page. If there are code conflicts, however, you have to update manually by running the following commands in your local repository:
+In the PR description, filling in `resolve #IssueNumber` will automatically close the corresponding Issue once this PR is merged; for details, see GitHub's documentation on closing issues via keywords.
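+
+For example, if the Issue you opened were (hypothetically) numbered 1234, the PR description would include the line `resolve #1234`.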
-```shell
-git checkout MY_COOL_STUFF_BRANCH
-git pull upstream develop
-# you may need to resolve conflicts following git's hints
-# build and test your code
-git push origin MY_COOL_STUFF_BRANCH
+Next, wait for review. If anything needs to be changed, just follow the steps above to update the corresponding branch in origin.
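+
+A minimal sketch of that update loop, assuming the branch is still named my-cool-stuff:
+
+```bash
+➜ git checkout my-cool-stuff
+➜ git pull upstream develop    # resolve conflicts here if git reports any
+# edit, build, and test your code; prefer a new commit over --amend so that
+# reviewers can see what changed between revisions
+➜ git commit
+➜ git push origin my-cool-stuff
+```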
+
+## Delete the remote branch
+
+After the PR has been merged into the main repository, you can delete the remote branch on the PR page.
+
+
+
+You can also delete the remote branch with `git push origin :branch-name`, for example:
+
+```bash
+➜ git push origin :my-cool-stuff
```
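+
+Recent versions of git also accept the more readable `--delete` form, which does the same thing:
+
+```bash
+➜ git push origin --delete my-cool-stuff
+```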
-Your Pull Request is now up to date.
-## Revise your pull request
+## Delete the local branch
-When revising your pull request according to reviewers' comments, please use `git commit` rather than `git commit --amend` to commit your changes, so that reviewers can see the difference between the new and the old versions.
+Finally, delete the local branch.
-The possible commands are:
+```bash
+# switch to the develop branch
+➜ git checkout develop
-```shell
-git checkout MY_COOL_STUFF_BRANCH
-git pull upstream develop # update local to the latest code base
-# some conflicts may occur here
-# start developing!
-env EDITOR=vim git commit # add the change log
-git push origin MY_COOL_STUFF_BRANCH
+# delete the my-cool-stuff branch
+➜ git branch -D my-cool-stuff
```
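+
+`-D` force-deletes the branch. The safer lowercase `-d` refuses to delete a branch that has not been merged into your current branch, so it only succeeds here once develop already contains the merged commits:
+
+```bash
+➜ git branch -d my-cool-stuff
+```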
+
+At this point, we have completed a full code-contribution workflow.
diff --git a/doc/howto/usage/k8s/k8s_basis_cn.md b/doc/howto/usage/k8s/k8s_basis_cn.md
index 6278dacb17a378da660b2f5434247efd41c995fc..4c3dc81ed38f239c1f4a83d22b49cf57b5d16a8b 100644
--- a/doc/howto/usage/k8s/k8s_basis_cn.md
+++ b/doc/howto/usage/k8s/k8s_basis_cn.md
@@ -14,7 +14,7 @@
- [*PersistentVolume*](https://kubernetes.io/docs/user-guide/persistent-volumes/): used together with [*PersistentVolumeClaim*](https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims), it describes external storage services as a uniform resource in Kubernetes, which makes storage easier to manage and to reference from Pods.
-# Deploying a Kubernetes cluster
+## Deploying a Kubernetes cluster
Kubernetes offers many cluster-deployment solutions, which this document will not re-introduce. Here are several common deployment methods:
@@ -25,7 +25,7 @@ Kubernetes offers many cluster-deployment solutions, which this document will n
You can refer to [this table](https://kubernetes.io/docs/getting-started-guides/#table-of-solutions) to choose a solution suited to your scenario.
-# Choosing a storage solution
+## Choosing a storage solution
Containers do not keep the data generated at runtime; the data that a job or application produces while running in a container disappears when the container is destroyed. To run a distributed machine-learning training job, an external storage service is needed to hold the training data and the training output.
Common storage services to choose from include:
@@ -35,9 +35,9 @@ Kubernetes offers many cluster-deployment solutions, which this document will n
- [*Ceph*](http://docs.ceph.com/docs/master/): a distributed file system that supports rbd, a POSIX API (ceph fs), and an object-storage API; see [here](https://kubernetes.io/docs/user-guide/volumes/#rbd).
- [*MooseFS*](https://moosefs.com/documentation.html): a distributed storage system. It must first be mounted on a server Node and then mounted into containers through a Kubernetes hostPath Volume.
-# Configuring kubectl
+## Configuring kubectl
-## Installing kubectl
+### Installing kubectl
```
# OS X
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
@@ -49,7 +49,7 @@ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s htt
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/windows/amd64/kubectl.exe
```
-## Configuring kubectl to access your Kubernetes cluster
+### Configuring kubectl to access your Kubernetes cluster
Edit the configuration file `~/.kube/config` and change the `Master-IP` address. If you use SSL authentication, you also need to configure `certificate-authority` and the user certificates under `users`. If you access the cluster without SSL (for example through port 8080), you can drop these certificate settings.
```
diff --git a/doc/index_cn.rst b/doc/index_cn.rst
index 460fedb5658a8ea9bbe8b602ee2b5df66502fa62..9279bac7f4b2898c18979630a8d6dfcb2dba70e0 100644
--- a/doc/index_cn.rst
+++ b/doc/index_cn.rst
@@ -5,7 +5,6 @@ PaddlePaddle Documentation
:maxdepth: 1
getstarted/index_cn.rst
- tutorials/index_cn.md
howto/index_cn.rst
api/index_cn.rst
faq/index_cn.rst
diff --git a/doc/index_en.rst b/doc/index_en.rst
index 1d9cca7de720ebc23fe816f32d158930d91c07e7..168c7667c61da677905585d6c4b5037ce80b3765 100644
--- a/doc/index_en.rst
+++ b/doc/index_en.rst
@@ -5,8 +5,6 @@ PaddlePaddle Documentation
:maxdepth: 1
getstarted/index_en.rst
- tutorials/index_en.md
howto/index_en.rst
api/index_en.rst
about/index_en.rst
-
\ No newline at end of file
diff --git a/doc_theme/templates/layout.html b/doc_theme/templates/layout.html
index 034740369ed10a748856e2205d3315f51a7de62f..65e61c5f298e19adc6330c378779a6edf418752e 100644
--- a/doc_theme/templates/layout.html
+++ b/doc_theme/templates/layout.html
@@ -114,10 +114,7 @@
@@ -137,7 +134,7 @@
{{ toctree }}
{% endblock %}
- {% if toc %}
+ {% if False %}
{% endif %}
@@ -168,7 +165,8 @@
VERSION:'{{ release|e }}',
COLLAPSE_INDEX:false,
FILE_SUFFIX:'{{ '' if no_search_suffix else file_suffix }}',
- HAS_SOURCE: {{ has_source|lower }}
+ HAS_SOURCE: {{ has_source|lower }},
+ SOURCELINK_SUFFIX: ".txt",
};
{%- for scriptfile in script_files %}
diff --git a/paddle/gserver/tests/sequence_layer_group.conf b/paddle/gserver/tests/sequence_layer_group.conf
index 68d150d553588c864de56ce1e6f283cc42fbbf2f..50f2d89d0271b2eaa460e57636eb09b6d6aeda18 100644
--- a/paddle/gserver/tests/sequence_layer_group.conf
+++ b/paddle/gserver/tests/sequence_layer_group.conf
@@ -48,8 +48,7 @@ lstm = lstmemory_group(
size=hidden_dim,
act=TanhActivation(),
gate_act=SigmoidActivation(),
- state_act=TanhActivation(),
- lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50))
+ state_act=TanhActivation())
lstm_last = last_seq(input=lstm)
diff --git a/paddle/gserver/tests/sequence_nest_layer_group.conf b/paddle/gserver/tests/sequence_nest_layer_group.conf
index 88cb42798baff79fa6a86ef11dabf1781575c0b4..c01b95f7a29ae73c2b3ccd5b56ad1d316cbc72ec 100644
--- a/paddle/gserver/tests/sequence_nest_layer_group.conf
+++ b/paddle/gserver/tests/sequence_nest_layer_group.conf
@@ -51,8 +51,7 @@ def lstm_group(lstm_group_input):
size=hidden_dim,
act=TanhActivation(),
gate_act=SigmoidActivation(),
- state_act=TanhActivation(),
- lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50))
+ state_act=TanhActivation())
return lstm_output
diff --git a/paddle/scripts/travis/docs.sh b/paddle/scripts/travis/docs.sh
index c784293695bf134b5e990639778b6e84ba45d00d..67b89adb4ddb7bb93cb776d64711078cb11a2784 100755
--- a/paddle/scripts/travis/docs.sh
+++ b/paddle/scripts/travis/docs.sh
@@ -60,6 +60,7 @@ function deploy_docs() {
deploy_docs "master" "."
deploy_docs "develop" "./develop/"
+deploy_docs "release/0.10.0" "./release/0.10.0/"
# Check is there anything changed.
set +e
diff --git a/paddle/trainer/tests/CMakeLists.txt b/paddle/trainer/tests/CMakeLists.txt
index c5c76a030d9e5f1deed63454b408442954ef5eae..08b2d8a38e2d20a357752269bd3ee3f515116abd 100644
--- a/paddle/trainer/tests/CMakeLists.txt
+++ b/paddle/trainer/tests/CMakeLists.txt
@@ -17,14 +17,17 @@ add_test(NAME test_Trainer
WORKING_DIRECTORY ${PROJ_ROOT}/paddle/)
############### test_TrainerOnePass ##########################
-add_unittest_without_exec(test_TrainerOnePass
- test_TrainerOnePass.cpp)
-add_test(NAME test_TrainerOnePass
- COMMAND ${PROJ_ROOT}/paddle/.set_python_path.sh -d
- ${PROJ_ROOT}/python/:${PROJ_ROOT}/paddle/trainer/tests
- ${PROJ_ROOT}/paddle/.set_port.sh -p port ${CMAKE_CURRENT_BINARY_DIR}/test_TrainerOnePass
- WORKING_DIRECTORY ${PROJ_ROOT}/paddle/)
-
+if(WITH_PYTHON)
+  # only run test_TrainerOnePass when PYTHON is enabled, because training one
+  # pass uses PyDataProvider2.
+ add_unittest_without_exec(test_TrainerOnePass
+ test_TrainerOnePass.cpp)
+ add_test(NAME test_TrainerOnePass
+ COMMAND ${PROJ_ROOT}/paddle/.set_python_path.sh -d
+ ${PROJ_ROOT}/python/:${PROJ_ROOT}/paddle/trainer/tests
+ ${PROJ_ROOT}/paddle/.set_port.sh -p port ${CMAKE_CURRENT_BINARY_DIR}/test_TrainerOnePass
+ WORKING_DIRECTORY ${PROJ_ROOT}/paddle/)
+endif()
################ test_CompareTwoNets ######################
add_unittest_without_exec(test_CompareTwoNets
test_CompareTwoNets.cpp)
diff --git a/python/CMakeLists.txt b/python/CMakeLists.txt
index e7a0895533dd8902df9a012ab230df2a67256483..bfa19d5ecc84a08614852c4c93de5b5793c1be9c 100644
--- a/python/CMakeLists.txt
+++ b/python/CMakeLists.txt
@@ -24,9 +24,12 @@ add_custom_target(paddle_python ALL DEPENDS
${OUTPUT_DIR}/.timestamp)
add_subdirectory(paddle/trainer_config_helpers/tests)
-add_subdirectory(paddle/v2/tests)
-add_subdirectory(paddle/v2/reader/tests)
-add_subdirectory(paddle/v2/plot/tests)
+if (WITH_SWIG_PY)
+  # enable v2 API unittests only when the paddle swig api is compiled
+ add_subdirectory(paddle/v2/tests)
+ add_subdirectory(paddle/v2/reader/tests)
+ add_subdirectory(paddle/v2/plot/tests)
+endif()
install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/dist/
DESTINATION opt/paddle/share/wheels
diff --git a/python/paddle/trainer_config_helpers/attrs.py b/python/paddle/trainer_config_helpers/attrs.py
index bf0208834600fef3bcf1b0496da8f5f77aea44c5..7b76e87f045e638d0a78e1ef5a191d465b7d79d7 100644
--- a/python/paddle/trainer_config_helpers/attrs.py
+++ b/python/paddle/trainer_config_helpers/attrs.py
@@ -208,12 +208,15 @@ class ExtraLayerAttribute(object):
drop_rate=None,
device=None):
self.attr = dict()
- if isinstance(error_clipping_threshold, float):
- assert error_clipping_threshold > 0
- self.attr["error_clipping_threshold"] = error_clipping_threshold
-
- if isinstance(drop_rate, float):
- assert drop_rate > 0
+ if error_clipping_threshold is not None:
+ error_clipping_threshold = float(error_clipping_threshold)
+ if error_clipping_threshold < 0:
+                raise ValueError("error_clipping_threshold must be >= 0")
+ self.attr['error_clipping_threshold'] = error_clipping_threshold
+ if drop_rate is not None:
+ drop_rate = float(drop_rate)
+ if drop_rate < 0:
+                raise ValueError("drop_rate must be >= 0")
self.attr["drop_rate"] = drop_rate
if isinstance(device, int):
diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py
index f906126d87941b649e364e317dde97f64f323b13..3a0c5cb27c321d3722c2bb87c47b8b6cfd4d2a44 100755
--- a/python/paddle/trainer_config_helpers/layers.py
+++ b/python/paddle/trainer_config_helpers/layers.py
@@ -84,6 +84,7 @@ __all__ = [
'GeneratedInput',
'SubsequenceInput',
'gru_step_layer',
+ 'gru_step_naive_layer',
'recurrent_layer',
'BaseGeneratedInput',
'conv_operator',
@@ -2284,7 +2285,7 @@ def img_pool_layer(input,
type_name = pool_type.name + '-projection' \
if (
- isinstance(pool_type, AvgPooling) or isinstance(pool_type, MaxPooling)) \
+ isinstance(pool_type, AvgPooling) or isinstance(pool_type, MaxPooling)) \
else pool_type.name
pool_size_y = pool_size if pool_size_y is None else pool_size_y
@@ -3084,6 +3085,78 @@ def gru_step_layer(input,
activation=act)
+@wrap_bias_attr_default()
+@wrap_param_attr_default()
+@wrap_act_default(param_names=['gate_act'], act=SigmoidActivation())
+@wrap_act_default(act=TanhActivation())
+@wrap_name_default('gru_step_naive')
+@layer_support(ERROR_CLIPPING, DROPOUT)
+def gru_step_naive_layer(input,
+ output_mem,
+ size=None,
+ name=None,
+ act=None,
+ gate_act=None,
+ bias_attr=None,
+ param_attr=None,
+ layer_attr=None):
+ """
+    GRU step layer built from mixed_layer primitives. Unlike gru_step_layer,
+    it supports ERROR_CLIPPING and DROPOUT.
+
+    :param input: the input of this layer, whose size must be 3 * size.
+    :param output_mem: the memory of the output from the previous time step.
+    :param size: the dimension of the output; defaults to input.size / 3.
+    :param name: the name of this layer.
+    :param act: the activation type of the output candidate.
+    :param gate_act: the activation type of the update and reset gates.
+    :param bias_attr: the bias attribute of the internal mixed layers.
+    :param param_attr: the parameter attribute of the full matrix projections.
+    :param layer_attr: the extra layer attribute.
+    :return: the LayerOutput of this GRU step.
+ """
+ if input.size % 3 != 0:
+        raise ValueError("GRU step input size must be divisible by 3")
+ if size is None:
+ size = input.size / 3
+
+ def __gate__(gate_name, offset):
+ with mixed_layer(
+ name=name + "_" + gate_name,
+ size=size,
+ layer_attr=layer_attr,
+ bias_attr=bias_attr,
+ act=gate_act) as gate:
+ gate += identity_projection(input=input, offset=offset)
+ gate += full_matrix_projection(
+ input=output_mem, param_attr=param_attr)
+ return gate
+
+ update_gate = __gate__("update", 0)
+ reset_gate = __gate__("reset", size)
+
+ with mixed_layer(
+ name=name + "_reset_output", bias_attr=False) as reset_output:
+ reset_output += dotmul_operator(a=output_mem, b=reset_gate)
+
+ with mixed_layer(
+ name=name + "_output_candidate",
+ size=size,
+ layer_attr=layer_attr,
+ bias_attr=bias_attr,
+ act=act) as output_candidate:
+ output_candidate += identity_projection(input=input, offset=2 * size)
+ output_candidate += full_matrix_projection(
+ input=reset_output, param_attr=param_attr)
+
+ with mixed_layer(name=name) as output:
+ output += identity_projection(output_mem)
+ output += dotmul_operator(a=output_mem, b=update_gate, scale=-1.0)
+ output += dotmul_operator(a=output_candidate, b=update_gate)
+
+ return output
+
+
@wrap_name_default()
@layer_support()
def get_output_layer(input, arg_name, name=None, layer_attr=None):
diff --git a/python/paddle/trainer_config_helpers/networks.py b/python/paddle/trainer_config_helpers/networks.py
index cadde11ff81658cb309cd1bf7a44bac6374c1e44..fb533a47e0b0585be6f0e019086993f8b3aa7f38 100755
--- a/python/paddle/trainer_config_helpers/networks.py
+++ b/python/paddle/trainer_config_helpers/networks.py
@@ -825,7 +825,8 @@ def gru_unit(input,
gru_param_attr=None,
act=None,
gate_act=None,
- gru_layer_attr=None):
+ gru_layer_attr=None,
+ naive=False):
"""
Define calculations that a gated recurrent unit performs in a single time
step. This function itself is not a recurrent layer, so that it can not be
@@ -857,7 +858,12 @@ def gru_unit(input,
out_mem = memory(name=name, size=size)
- gru_out = gru_step_layer(
+ if naive:
+ __step__ = gru_step_naive_layer
+ else:
+ __step__ = gru_step_layer
+
+ gru_out = __step__(
name=name,
input=input,
output_mem=out_mem,
@@ -879,7 +885,8 @@ def gru_group(input,
gru_param_attr=None,
act=None,
gate_act=None,
- gru_layer_attr=None):
+ gru_layer_attr=None,
+ naive=False):
"""
gru_group is a recurrent layer group version of Gated Recurrent Unit. It
does exactly the same calculation as the grumemory layer does. A promising
@@ -928,7 +935,8 @@ def gru_group(input,
gru_param_attr=gru_param_attr,
act=act,
gate_act=gate_act,
- gru_layer_attr=gru_layer_attr)
+ gru_layer_attr=gru_layer_attr,
+ naive=naive)
return recurrent_group(
name='%s_recurrent_group' % name,
@@ -949,7 +957,8 @@ def simple_gru(input,
gru_param_attr=None,
act=None,
gate_act=None,
- gru_layer_attr=None):
+ gru_layer_attr=None,
+ naive=False):
"""
You maybe see gru_step_layer, grumemory in layers.py, gru_unit, gru_group,
simple_gru in network.py. The reason why there are so many interfaces is
@@ -1018,7 +1027,8 @@ def simple_gru(input,
gru_param_attr=gru_param_attr,
act=act,
gate_act=gate_act,
- gru_layer_attr=gru_layer_attr)
+ gru_layer_attr=gru_layer_attr,
+ naive=naive)
@wrap_name_default('simple_gru2')
diff --git a/python/paddle/trainer_config_helpers/tests/configs/protostr/projections.protostr b/python/paddle/trainer_config_helpers/tests/configs/protostr/projections.protostr
index 2afc3afef6d39ce9b8eef05948861284775d5011..d8bd7b9dfb71a392d0dc53872a0d72f47530530f 100644
--- a/python/paddle/trainer_config_helpers/tests/configs/protostr/projections.protostr
+++ b/python/paddle/trainer_config_helpers/tests/configs/protostr/projections.protostr
@@ -320,6 +320,7 @@ layers {
}
}
drop_rate: 0.5
+ error_clipping_threshold: 40.0
}
parameters {
name: "___embedding_0__.w0"
diff --git a/python/paddle/v2/layer.py b/python/paddle/v2/layer.py
index 384de9b9d57f88e84ab6067846174bb037502dc0..89cca7acd34b8dea0572169338649b5e9ff6536a 100644
--- a/python/paddle/v2/layer.py
+++ b/python/paddle/v2/layer.py
@@ -356,6 +356,9 @@ def mixed(size=0,
return MixedLayerV2(size, input, name, act, bias_attr, layer_attr)
+mixed.__doc__ = conf_helps.mixed_layer.__doc__
+
+
class RecurrentLayerInput(Layer):
def __init__(self, recurrent_name, index, parent_layers):
parents_len = len(parent_layers)
@@ -404,6 +407,8 @@ data.__name__ = 'data'
AggregateLevel = conf_helps.layers.AggregateLevel
ExpandLevel = conf_helps.layers.ExpandLevel
memory = MemoryV2
+memory.__name__ = 'memory'
+memory.__doc__ = conf_helps.memory.__doc__
def __layer_name_mapping__(inname):
@@ -512,6 +517,9 @@ def recurrent_group(step, input, name=None):
return retv
+recurrent_group.__doc__ = conf_helps.recurrent_group.__doc__
+
+
@wrap_name_default()
def beam_search(step,
input,
@@ -579,6 +587,8 @@ def beam_search(step,
return tmp
+beam_search.__doc__ = conf_helps.beam_search.__doc__
+
__projection_names__ = filter(lambda x: x.endswith('_projection'),
dir(conf_helps))