Commit ea3be884 authored by Luo Tao

Merge branch 'develop' into fluid_python

@@ -22,7 +22,8 @@ COPY ./paddle/scripts/docker/root/ /root/
 RUN apt-get update && \
     apt-get install -y \
-    git python-pip python-dev openssh-server bison libnccl-dev \
+    git python-pip python-dev openssh-server bison \
+    libnccl2=2.1.2-1+cuda8.0 libnccl-dev=2.1.2-1+cuda8.0 \
     wget unzip unrar tar xz-utils bzip2 gzip coreutils ntp \
     curl sed grep graphviz libjpeg-dev zlib1g-dev \
     python-matplotlib gcc-4.8 g++-4.8 \
......
-API Overview
-============
-TBD
+V2 API Overview
+================
+
+The PaddlePaddle V2 API is designed to provide a modern user interface for PaddlePaddle V1 (the original layer-based platform of PaddlePaddle).
+It introduces high-level concepts such as `Layers <http://www.paddlepaddle.org/docs/develop/api/en/v2/config/layer.html>`_, `Optimizer <http://www.paddlepaddle.org/docs/develop/api/en/v2/config/optimizer.html>`_, `Evaluator <http://www.paddlepaddle.org/docs/develop/api/en/v2/config/evaluators.html>`_ and `Data Reader <http://www.paddlepaddle.org/docs/develop/api/en/v2/data/data_reader.html>`_ to make model configuration more familiar to users.
+A model is composed of the computation described by a group of `Layers`, with an `Evaluator` to define the error, an `Optimizer` to update the parameters, and a `Data Reader` to feed in the data.
+
+We also provide the `interface for Training and Inference <http://www.paddlepaddle.org/docs/develop/api/en/v2/run_logic.html>`_ to help control the training and inference phases. It has several easy-to-use methods:
+
+- `paddle.train`
+- `paddle.test`
+- `paddle.infer`
+
+To better expose the internal running details, different `events <http://www.paddlepaddle.org/docs/develop/api/en/v2/run_logic.html#event>`_ are made available to users through callbacks.
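For illustration, the workflow described in the documentation above could look roughly like the following V2-style script. This is a minimal sketch based on the linked V2 documentation, not code from this commit; the dataset, layer sizes, and exact call signatures (paddle.init, paddle.trainer.SGD, paddle.event.EndIteration, and so on) are assumptions and may differ between releases.

# Minimal V2 training sketch (assumed API; see the links above).
import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)

# Layers describe the computation.
x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(13))
y = paddle.layer.data(name='y', type=paddle.data_type.dense_vector(1))
y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
cost = paddle.layer.square_error_cost(input=y_predict, label=y)

# The Optimizer updates the parameters created from the cost topology.
parameters = paddle.parameters.create(cost)
optimizer = paddle.optimizer.Momentum(momentum=0.9)
trainer = paddle.trainer.SGD(
    cost=cost, parameters=parameters, update_equation=optimizer)

# Events expose internal running details through a user callback.
def event_handler(event):
    if isinstance(event, paddle.event.EndIteration) and event.batch_id % 100 == 0:
        print("pass %d, batch %d, cost %f" %
              (event.pass_id, event.batch_id, event.cost))

# A Data Reader feeds mini-batches into training.
trainer.train(
    reader=paddle.batch(paddle.dataset.uci_housing.train(), batch_size=32),
    event_handler=event_handler,
    num_passes=10)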
@@ -129,9 +129,6 @@ TEST(NCCL, all_reduce) {
 }  // namespace paddle
 
 int main(int argc, char** argv) {
-  // FIXME(tonyyang-svail):
-  //   Due to the driver issue on our CI, disable for now
-  return 0;
   dev_count = paddle::platform::GetCUDADeviceCount();
   if (dev_count <= 1) {
     LOG(WARNING)
......
@@ -171,7 +171,7 @@ EOF
 EOF
     if [[ ${WITH_GPU} == "ON" ]]; then
-        NCCL_DEPS="apt-get install -y libnccl-dev &&"
+        NCCL_DEPS="apt-get install -y libnccl2=2.1.2-1+cuda8.0 libnccl-dev=2.1.2-1+cuda8.0 &&"
     else
         NCCL_DEPS=""
     fi
......
@@ -1519,21 +1519,21 @@ def batch_norm(input,
     bias = helper.create_parameter(
         attr=helper.bias_attr, shape=param_shape, dtype=dtype, is_bias=True)
 
-    mean = helper.create_global_variable(
-        name=moving_mean_name,
-        dtype=input.dtype,
-        shape=param_shape,
-        persistable=True,
-        stop_gradient=True)
-    helper.set_variable_initializer(var=mean, initializer=Constant(0.0))
+    mean = helper.create_parameter(
+        attr=ParamAttr(
+            name=moving_mean_name, initializer=Constant(0.0), trainable=False),
+        shape=param_shape,
+        dtype=input.dtype)
+    mean.stop_gradient = True
 
-    variance = helper.create_global_variable(
-        name=moving_variance_name,
-        dtype=input.dtype,
-        shape=param_shape,
-        persistable=True,
-        stop_gradient=True)
-    helper.set_variable_initializer(var=variance, initializer=Constant(1.0))
+    variance = helper.create_parameter(
+        attr=ParamAttr(
+            name=moving_variance_name,
+            initializer=Constant(1.0),
+            trainable=False),
+        shape=param_shape,
+        dtype=input.dtype)
+    variance.stop_gradient = True
 
     # create output
     # mean and mean_out share the same memory
......
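For context on the hunk above, here is a minimal usage sketch of the Fluid batch_norm layer. Only batch_norm and its moving_mean_name / moving_variance_name arguments come from the code shown in this commit; the import path (the Fluid package was importable as paddle.v2.fluid around this time) and the surrounding layers are illustrative assumptions.

# Illustrative sketch only; the import path and surrounding layers are assumed.
import paddle.v2.fluid as fluid

img = fluid.layers.data(name='img', shape=[3, 32, 32], dtype='float32')
conv = fluid.layers.conv2d(input=img, num_filters=16, filter_size=3)

# After this change, the moving mean/variance are created inside batch_norm as
# non-trainable parameters (trainable=False, with stop_gradient set), named by
# these arguments, instead of plain global variables.
bn = fluid.layers.batch_norm(
    input=conv,
    act='relu',
    is_test=False,
    moving_mean_name='bn_moving_mean',
    moving_variance_name='bn_moving_variance')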