Commit 194a3b8a authored by liaogang

Revise docker docs

@@ -221,3 +221,7 @@ ENDIF(PYTHONLIBS_FOUND AND PYTHONINTERP_FOUND)

INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_DIR})
INCLUDE_DIRECTORIES(${PYTHON_NUMPY_INCLUDE_DIR})
IF(NOT WITH_PYTHON)
SET(PYTHON_LIBRARIES "")
ENDIF()
@@ -109,6 +109,12 @@ sum_to_one_norm

    :members: sum_to_one_norm
    :noindex:
cross_channel_norm
------------------
.. automodule:: paddle.v2.layer
    :members: cross_channel_norm
    :noindex:
Recurrent Layers
================
......
@@ -4,49 +4,83 @@ How to use PaddlePaddle Docker containers

Docker containers are currently the only officially supported way to run PaddlePaddle, because Docker runs on all major operating systems, including Linux, Mac OS X, and Windows. Note that you need to change the `Docker settings <https://github.com/PaddlePaddle/Paddle/issues/627>`_ to make full use of the hardware resources on Mac OS X and Windows.

Usage of PaddlePaddle's released Docker images
----------------------------------------------

For each PaddlePaddle release, we publish two types of Docker images: a development image and production images. The production images include a CPU-only version, a GPU version, and their corresponding no-AVX versions.
We publish the latest Docker images on `dockerhub.com <https://hub.docker.com/r/paddledev/paddle/>`_; you can find the latest Paddle image versions under the "tags" tab.

1. Development image: :code:`paddlepaddle/paddle:<version>-dev`

This image contains the Paddle development tools together with the build and runtime environment. Instead of configuring a local environment, you can use the development image for development, building, releasing, and documentation writing. Since different Paddle versions may require different dependencies and tools, take the version into account if you set up a development environment yourself.
The development image contains the following tools:
- gcc/clang
- nvcc
- Python
- sphinx
- woboq
- sshd
Many developers work on remote servers with GPUs. You can SSH into such a server and run :code:`docker exec` to enter a running development container and start working there, or start an SSHD service in the development image so that developers can log into the container directly; see the sketch below.
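As a minimal sketch of the :code:`docker exec` route (the server address and the container name :code:`paddle_dev` are assumed examples, not fixed names):

.. code-block:: bash

    # log into the GPU server, then attach to an already-running dev container
    ssh user@gpu-server
    docker exec -it paddle_dev /bin/bash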
To run the development image as an interactive container:

.. code-block:: bash

    docker run -it --rm paddledev/paddle:<version>-dev /bin/bash

Alternatively, you can run the container as a background daemon:

.. code-block:: bash

    docker run -d -p 2202:22 -p 8888:8888 paddledev/paddle:<version>-dev

Then SSH into the container with the password :code:`root`:

.. code-block:: bash

    ssh -p 2202 root@localhost
One advantage of SSH is that we can connect to the container from more than one terminal. For example, one terminal can run vi while another runs the Python interpreter. Another advantage is that we can run the PaddlePaddle container on a remote server and SSH into it from a laptop.
2. Production images: there are four variants, split by CPU/GPU and AVX support:

   - GPU/AVX: :code:`paddlepaddle/paddle:<version>-gpu`
   - GPU/no-AVX: :code:`paddlepaddle/paddle:<version>-gpu-noavx`
   - CPU/AVX: :code:`paddlepaddle/paddle:<version>`
   - CPU/no-AVX: :code:`paddlepaddle/paddle:<version>-noavx`
Both the CPU-only and the GPU images use the AVX instruction set, but older computers produced before 2008 do not support AVX. The following command checks whether your Linux machine supports AVX:

.. code-block:: bash

    if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi

If the output is No, choose the no-AVX images.

The methods above also work with the GPU images; just don't forget to install the latest GPU driver on the host machine first.
To make sure the GPU driver works inside the image, we recommend running the images with `nvidia-docker <https://github.com/NVIDIA/nvidia-docker>`_:

.. code-block:: bash

    nvidia-docker run -it --rm paddledev/paddle:0.10.0rc1-gpu /bin/bash
Note: if you run into problems with nvidia-docker, you can try the older method below, although we do not recommend it:

.. code-block:: bash

    export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
    export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
    docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu
3. Use a production image to release your AI application

Suppose you have finished an AI training Python program :code:`a.py`, developed with the development image on your development machine. You can test-run it there with:

.. code-block:: bash

    docker run -it -v $PWD:/work paddle /work/a.py

This assumes that all dependencies of :code:`a.py` are available in the Paddle production container. If you need additional dependencies, or want to release an image of your application, you can write a :code:`Dockerfile` that starts with :code:`FROM paddledev/paddle:<version>` to build and publish your own AI application image; a sketch follows below.
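As a sketch, such a :code:`Dockerfile` might be written and used like the following shell snippet (the extra dependency :code:`scipy` and the image name :code:`my-paddle-app` are assumed examples):

.. code-block:: bash

    # layer your program and its dependencies on top of the production image
    cat > Dockerfile <<'EOF'
    FROM paddledev/paddle:<version>
    RUN pip install scipy
    COPY a.py /work/a.py
    CMD ["python", "/work/a.py"]
    EOF
    docker build -t my-paddle-app .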
Running the PaddlePaddle Book
-----------------------------

@@ -56,11 +90,11 @@ Jupyter Notebook is an open-source web application for creating and sharing

The PaddlePaddle Book is an interactive Jupyter Notebook for users and developers.
If you want a deeper understanding of deep learning, the PaddlePaddle Book is definitely your best choice.

We provide a Docker image that runs the PaddlePaddle Book directly; simply run:

.. code-block:: bash

    docker run -p 8888:8888 paddlepaddle/book

Then enter the following address in your browser:

@@ -70,46 +104,25 @@ The PaddlePaddle Book is an interactive Jupyter Notebook for users and developers

That's all. Enjoy your journey!
Developing PaddlePaddle in Docker containers
--------------------------------------------

Developers can work on PaddlePaddle inside the Docker development image. This lets developers work on different platforms -- Linux, Mac OS X, and Windows -- in a consistent way.

1. Build the development image

.. code-block:: bash

    git clone --recursive https://github.com/PaddlePaddle/Paddle
    cd Paddle
    docker build -t paddle:dev .

Note that by default :code:`docker build` does not import the source tree into the image or compile it. If you want to do that, build the development image first and then run:

.. code-block:: bash

    docker run -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TEST=OFF" -e "RUN_TEST=OFF" paddle:dev
2. Run the development environment

@@ -118,11 +131,11 @@ The PaddlePaddle Book is an interactive Jupyter Notebook for users and developers

.. code-block:: bash

    docker run -d -p 2202:22 -v $PWD:/paddle paddle:dev sshd

This starts a Docker container with the PaddlePaddle development environment; the source tree is mounted at :code:`/paddle`.

The :code:`docker run` command above actually starts an SSHD server listening on port 2202, so we can SSH into the development container:

.. code-block:: bash
@@ -147,14 +160,14 @@ The PaddlePaddle Book is an interactive Jupyter Notebook for users and developers

Documentation
-------------

Paddle's Docker development image ships with an HTML version of the C++ source code, generated with the `woboq code browser
<https://github.com/woboq/woboq_codebrowser>`_, so that users can browse the C++ sources conveniently.

As long as you give the PaddlePaddle container a name when starting it, you can run another Nginx Docker container to serve the HTML code:

.. code-block:: bash

    docker run -d --name paddle-cpu-doc paddle:<version>-dev
    docker run -d --volumes-from paddle-cpu-doc -p 8088:80 nginx

Then we can open http://localhost:8088/paddle/ in a browser to read the code.
@@ -12,52 +12,98 @@ of your hardware resource on Mac OS X and Windows.

Usage of CPU-only and GPU Images
----------------------------------

For each version of PaddlePaddle, we release two types of Docker images: a development
image and production images. The production images include a CPU-only version, a CUDA
GPU version, and their no-AVX counterparts. We put the Docker images on
`dockerhub.com <https://hub.docker.com/r/paddledev/paddle/>`_. You can find the
latest versions under the "tags" tab at dockerhub.com.
1. Development image: :code:`paddlepaddle/paddle:<version>-dev`

This image packs the Paddle development tools together with the build and runtime
environment. Users and developers can use this image instead of their own local
computers for development, building, releasing, documentation writing, and so on.
Since different versions of Paddle may depend on different versions of libraries
and tools, pay attention to the versions if you want to set up a local environment
yourself.
The development image contains:
- gcc/clang
- nvcc
- Python
- sphinx
- woboq
- sshd
Many developers work on servers with GPUs. They can SSH into the server
and run :code:`docker exec` to enter the Docker container and start working.
They can also start a development container with an SSHD service, so that they
can log into the container directly; see the sketch below.
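A minimal sketch of the :code:`docker exec` route (the host :code:`gpu-server` and the container name :code:`paddle_dev` are assumed examples):

.. code-block:: bash

    ssh user@gpu-server                   # log into the GPU server
    docker exec -it paddle_dev /bin/bash  # attach to a running dev container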
To run the CPU-only image as an interactive container:

.. code-block:: bash

    docker run -it --rm paddledev/paddle:<version> /bin/bash
or, we can run it as a daemon container:

.. code-block:: bash

    docker run -d -p 2202:22 -p 8888:8888 paddledev/paddle:<version>

and SSH to this container using password :code:`root`:

.. code-block:: bash

    ssh -p 2202 root@localhost
An advantage of using SSH is that we can connect to PaddlePaddle from
more than one terminal. For example, one terminal running vi and
another one running the Python interpreter. Another advantage is that we
can run the PaddlePaddle container on a remote server and SSH to it
from a laptop.
2. Production images: these come in multiple variants:

   - GPU/AVX: :code:`paddlepaddle/paddle:<version>-gpu`
   - GPU/no-AVX: :code:`paddlepaddle/paddle:<version>-gpu-noavx`
   - CPU/AVX: :code:`paddlepaddle/paddle:<version>`
   - CPU/no-AVX: :code:`paddlepaddle/paddle:<version>-noavx`
Please be aware that the CPU-only and the GPU images both use the AVX
instruction set, but old computers produced before 2008 do not support
AVX. The following command checks if your Linux computer supports
AVX:

.. code-block:: bash

    if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi

If it doesn't, use the non-AVX images.
The methods above work with the GPU image too -- just please don't forget
to install the GPU driver on the host first. To support the GPU driver, we
recommend using `nvidia-docker <https://github.com/NVIDIA/nvidia-docker>`_. Run using:

.. code-block:: bash

    nvidia-docker run -it --rm paddledev/paddle:0.10.0rc1-gpu /bin/bash

Note: If you have a problem running nvidia-docker, you may try the old method we have used (not recommended):

.. code-block:: bash
    export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
    export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
    docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu
3. Use a production image to release your AI application

Suppose that we have a simple application program :code:`a.py`; we can test and run it using the production image:

.. code-block:: bash

    docker run -it -v $PWD:/work paddle /work/a.py

But this works only if all dependencies of :code:`a.py` are in the production image. If this is not the case, we need to build a new Docker image from the production image with the additional dependencies installed; a sketch follows below.
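For example, a hypothetical :code:`Dockerfile` that adds :code:`scipy` as an extra dependency could be created and used like this (all names here are illustrative):

.. code-block:: bash

    cat > Dockerfile <<'EOF'
    FROM paddledev/paddle:<version>
    RUN pip install scipy
    COPY a.py /work/a.py
    CMD ["python", "/work/a.py"]
    EOF
    docker build -t my-ai-app .
    docker run -it my-ai-app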
PaddlePaddle Book

@@ -71,11 +117,11 @@ PaddlePaddle Book is an interactive Jupyter Notebook for users and developers.

We already exposed port 8888 for this book. If you want to
dig deeper into deep learning, PaddlePaddle Book definitely is your best choice.

We provide a packaged book image; simply issue the command:

.. code-block:: bash

    docker run -p 8888:8888 paddlepaddle/book

Then, copy and paste the address into your local browser:

@@ -85,32 +131,6 @@ Then, copy and paste the address into your local browser:

That's all. Enjoy your journey!
Development Using Docker
------------------------

@@ -118,22 +138,21 @@ Developers can work on PaddlePaddle using Docker. This allows
developers to work on different platforms -- Linux, Mac OS X, and
Windows -- in a consistent way.
1. Build the Development Docker Image

.. code-block:: bash

    git clone --recursive https://github.com/PaddlePaddle/Paddle
    cd Paddle
    docker build -t paddle:dev .

Note that by default :code:`docker build` wouldn't import the source
tree into the image and build it. If we want to do that, we need to build the
development Docker image first and then run the following command:

.. code-block:: bash

    docker run -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TEST=OFF" -e "RUN_TEST=OFF" paddle:dev
2. Run the Development Environment

@@ -144,14 +163,13 @@ Windows -- in a consistent way.

.. code-block:: bash

    docker run -d -p 2202:22 -p 8888:8888 -v $PWD:/paddle paddle:dev sshd

This runs a container of the development environment Docker image
with the local source tree mounted to :code:`/paddle` of the
container.

The above :code:`docker run` command actually starts
an SSHD server listening on port 2202. This allows us to log into
this container with:
@@ -199,7 +217,7 @@ container:

.. code-block:: bash

    docker run -d --name paddle-cpu-doc paddle:<version>
    docker run -d --volumes-from paddle-cpu-doc -p 8088:80 nginx
......
@@ -46,7 +46,6 @@ PaddlePaddle provides Ubuntu 14.04 deb packages.

with_double: OFF
with_python: ON
with_rdma: OFF
with_timer: OFF
with_predict_sdk:
......
@@ -76,8 +76,6 @@ SWIG_LINK_LIBRARIES(swig_paddle

${CMAKE_DL_LIBS}
${EXTERNAL_LIBS}
${CMAKE_THREAD_LIBS_INIT}
${START_END}
)
......
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "Layer.h"
#include "NormLayer.h"
#include "paddle/math/BaseMatrix.h"
#include "paddle/math/Matrix.h"
namespace paddle {
MatrixPtr CrossChannelNormLayer::createSampleMatrix(MatrixPtr data,
size_t iter,
size_t spatialDim) {
return Matrix::create(data->getData() + iter * channels_ * spatialDim,
channels_,
spatialDim,
false,
useGpu_);
}
MatrixPtr CrossChannelNormLayer::createSpatialMatrix(MatrixPtr data,
size_t iter,
size_t spatialDim) {
return Matrix::create(
data->getData() + iter * spatialDim, 1, spatialDim, false, useGpu_);
}
void CrossChannelNormLayer::forward(PassType passType) {
Layer::forward(passType);
MatrixPtr inV = getInputValue(0);
size_t batchSize = inV->getHeight();
size_t dataDim = inV->getWidth();
CHECK_EQ(getSize(), dataDim);
reserveOutput(batchSize, dataDim);
MatrixPtr outV = getOutputValue();
size_t spatialDim = dataDim / channels_;
Matrix::resizeOrCreate(dataBuffer_, batchSize, dataDim, false, useGpu_);
Matrix::resizeOrCreate(spatialBuffer_, 1, spatialDim, false, useGpu_);
Matrix::resizeOrCreate(normBuffer_, batchSize, spatialDim, false, useGpu_);
normBuffer_->zeroMem();
// add a small epsilon to avoid division by zero
normBuffer_->addScalar(*normBuffer_, 1e-6);
inV->square2(*dataBuffer_);
for (size_t i = 0; i < batchSize; i++) {
const MatrixPtr inVTmp = createSampleMatrix(inV, i, spatialDim);
const MatrixPtr dataTmp = createSampleMatrix(dataBuffer_, i, spatialDim);
MatrixPtr outVTmp = createSampleMatrix(outV, i, spatialDim);
MatrixPtr normTmp = createSpatialMatrix(normBuffer_, i, spatialDim);
// compute norm.
spatialBuffer_->sumCols(*dataTmp, 1, 0);
spatialBuffer_->sqrt2(*spatialBuffer_);
normTmp->copyFrom(*spatialBuffer_);
outVTmp->copyFrom(*inVTmp);
outVTmp->divRowVector(*spatialBuffer_);
// scale the layer.
outVTmp->mulColVector(*scale_->getW());
}
}
void CrossChannelNormLayer::backward(const UpdateCallback& callback) {
MatrixPtr inG = getInputGrad(0);
MatrixPtr inV = getInputValue(0);
MatrixPtr outG = getOutputGrad();
MatrixPtr outV = getOutputValue();
size_t batchSize = inG->getHeight();
size_t dataDim = inG->getWidth();
size_t spatialDim = dataDim / channels_;
dataBuffer_->dotMul(*outG, *outV);
Matrix::resizeOrCreate(scaleDiff_, channels_, 1, false, useGpu_);
Matrix::resizeOrCreate(channelBuffer_, channels_, 1, false, useGpu_);
Matrix::resizeOrCreate(sampleBuffer_, channels_, spatialDim, false, useGpu_);
scaleDiff_->zeroMem();
for (size_t i = 0; i < batchSize; i++) {
MatrixPtr outGTmp = createSampleMatrix(outG, i, spatialDim);
const MatrixPtr dataTmp = createSampleMatrix(dataBuffer_, i, spatialDim);
const MatrixPtr inVTmp = createSampleMatrix(inV, i, spatialDim);
const MatrixPtr inGTmp = createSampleMatrix(inG, i, spatialDim);
const MatrixPtr normTmp = createSpatialMatrix(normBuffer_, i, spatialDim);
channelBuffer_->sumRows(*dataTmp, 1, 0);
channelBuffer_->dotDiv(*channelBuffer_, *(scale_->getW()));
// store a / scale[i] in scaleDiff_ temporary
scaleDiff_->add(*channelBuffer_, 1.);
sampleBuffer_->dotMul(*inVTmp, *outGTmp);
spatialBuffer_->sumCols(*sampleBuffer_, 1., 1.);
// scale the grad
inGTmp->copyFrom(*inVTmp);
inGTmp->mulRowVector(*spatialBuffer_);
// divide by square of norm
spatialBuffer_->dotMul(*normTmp, *normTmp);
inGTmp->divRowVector(*spatialBuffer_);
// subtract
inGTmp->add(*outGTmp, -1, 1);
// divide by norm
inGTmp->divRowVector(*normTmp);
// scale the diff
inGTmp->mulColVector(*scale_->getW());
}
// update scale
if (scale_->getWGrad()) scale_->getWGrad()->copyFrom(*scaleDiff_);
scale_->getParameterPtr()->incUpdate(callback);
}
} // namespace paddle
@@ -26,6 +26,8 @@ Layer* NormLayer::create(const LayerConfig& config) {

return new ResponseNormLayer(config);
} else if (norm == "cmrnorm-projection") {
return new CMRProjectionNormLayer(config);
} else if (norm == "cross-channel-norm") {
return new CrossChannelNormLayer(config);
} else {
LOG(FATAL) << "Unknown norm type: " << norm;
return nullptr;
@@ -54,4 +56,14 @@ bool ResponseNormLayer::init(const LayerMap& layerMap,

return true;
}
bool CrossChannelNormLayer::init(const LayerMap& layerMap,
const ParameterMap& parameterMap) {
Layer::init(layerMap, parameterMap);
CHECK(parameters_[0]);
const NormConfig& conf = config_.inputs(0).norm_conf();
channels_ = conf.channels();
scale_.reset(new Weight(channels_, 1, parameters_[0]));
return true;
}
} // namespace paddle
@@ -65,4 +65,35 @@ public:

}
};
/**
 * This layer applies normalization across the channels of each sample to a
 * conv layer's output, and scales the output by a group of trainable factors
 * whose dimension equals the number of channels.
 * - Input: One and only one input layer is accepted.
 * - Output: The normalized data of the input data.
 * Reference:
 *   Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed,
 *   Cheng-Yang Fu, Alexander C. Berg. SSD: Single Shot MultiBox Detector
 */
class CrossChannelNormLayer : public NormLayer {
public:
explicit CrossChannelNormLayer(const LayerConfig& config)
: NormLayer(config) {}
bool init(const LayerMap& layerMap, const ParameterMap& parameterMap);
void forward(PassType passType);
void backward(const UpdateCallback& callback);
MatrixPtr createSampleMatrix(MatrixPtr data, size_t iter, size_t spatialDim);
MatrixPtr createSpatialMatrix(MatrixPtr data, size_t iter, size_t spatialDim);
protected:
size_t channels_;
std::unique_ptr<Weight> scale_;
MatrixPtr scaleDiff_;
MatrixPtr normBuffer_;
MatrixPtr dataBuffer_;
MatrixPtr channelBuffer_;
MatrixPtr spatialBuffer_;
MatrixPtr sampleBuffer_;
};
} // namespace paddle
@@ -45,27 +45,32 @@ protected:

MatrixPtr buffer_;
};
REGISTER_LAYER(priorbox, PriorBoxLayer);
bool PriorBoxLayer::init(const LayerMap& layerMap,
                         const ParameterMap& parameterMap) {
Layer::init(layerMap, parameterMap);
auto pbConf = config_.inputs(0).priorbox_conf();
std::vector<real> tmp;
aspectRatio_.push_back(1.);
std::copy(pbConf.min_size().begin(),
          pbConf.min_size().end(),
          std::back_inserter(minSize_));
std::copy(pbConf.max_size().begin(),
          pbConf.max_size().end(),
          std::back_inserter(maxSize_));
std::copy(pbConf.variance().begin(),
          pbConf.variance().end(),
          std::back_inserter(variance_));
std::copy(pbConf.aspect_ratio().begin(),
          pbConf.aspect_ratio().end(),
          std::back_inserter(tmp));
// flip: for each configured aspect ratio r, also add its reciprocal 1/r
int inputRatioLength = tmp.size();
for (int index = 0; index < inputRatioLength; index++) {
  aspectRatio_.push_back(tmp[index]);
  aspectRatio_.push_back(1 / tmp[index]);
}
numPriors_ = aspectRatio_.size();
if (maxSize_.size() > 0) numPriors_++;
return true;
@@ -94,12 +99,12 @@ void PriorBoxLayer::forward(PassType passType) {

for (int w = 0; w < layerWidth; ++w) {
real centerX = (w + 0.5) * stepW;
real centerY = (h + 0.5) * stepH;
real minSize = 0;
for (size_t s = 0; s < minSize_.size(); s++) {
// first prior.
minSize = minSize_[s];
real boxWidth = minSize;
real boxHeight = minSize;
// xmin, ymin, xmax, ymax.
tmpPtr[idx++] = (centerX - boxWidth / 2.) / imageWidth;
tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight;

@@ -112,7 +117,7 @@ void PriorBoxLayer::forward(PassType passType) {

CHECK_EQ(minSize_.size(), maxSize_.size());
// second prior.
for (size_t s = 0; s < maxSize_.size(); s++) {
real maxSize = maxSize_[s];
boxWidth = boxHeight = sqrt(minSize * maxSize);
tmpPtr[idx++] = (centerX - boxWidth / 2.) / imageWidth;
tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight;

@@ -145,6 +150,5 @@ void PriorBoxLayer::forward(PassType passType) {

MatrixPtr outV = getOutputValue();
outV->copyFrom(buffer_->data_, dim * 2);
}
} // namespace paddle
@@ -1642,6 +1642,25 @@ TEST(Layer, PadLayer) {

}
}
TEST(Layer, CrossChannelNormLayer) {
TestConfig config;
config.layerConfig.set_type("norm");
config.layerConfig.set_size(100);
LayerInputConfig* input = config.layerConfig.add_inputs();
NormConfig* norm = input->mutable_norm_conf();
norm->set_norm_type("cross-channel-norm");
norm->set_channels(10);
norm->set_size(100);
norm->set_scale(0);
norm->set_pow(0);
norm->set_blocked(0);
config.inputDefs.push_back({INPUT_DATA, "layer_0", 100, 10});
for (auto useGpu : {false, true}) {
testLayerGrad(config, "cross-channel-norm", 10, false, useGpu, false, 5);
}
}
TEST(Layer, smooth_l1) {
TestConfig config;
config.layerConfig.set_type("smooth_l1");
......
@@ -1453,6 +1453,24 @@ void BaseMatrixT<T>::divRowVector(BaseMatrixT& b) {

true_type() /* bAsRowVector */, false_type());
}
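// Element-wise multiply each column of this matrix by the column vector b.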
template<class T>
void BaseMatrixT<T>::mulColVector(BaseMatrixT& b) {
MatrixOffset offset(0, 0, 0, 0);
int numRows = height_;
int numCols = width_;
applyBinary(binary::DotMul<T>(), b, numRows, numCols, offset,
false_type(), true_type() /* bAsColVector */);
}
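// Element-wise divide each column of this matrix by the column vector b.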
template<class T>
void BaseMatrixT<T>::divColVector(BaseMatrixT& b) {
MatrixOffset offset(0, 0, 0, 0);
int numRows = height_;
int numCols = width_;
applyBinary(binary::DotDiv<T>(), b, numRows, numCols, offset,
false_type(), true_type() /* bAsColVector */);
}
template<>
template <class Agg>
int BaseMatrixT<real>::applyRow(Agg agg, BaseMatrixT& b) {
......
@@ -545,6 +545,9 @@ public:

void mulRowVector(BaseMatrixT& b);
void divRowVector(BaseMatrixT& b);
void mulColVector(BaseMatrixT& b);
void divColVector(BaseMatrixT& b);
void addP2P(BaseMatrixT& b);

/**
......
@@ -110,6 +110,8 @@ TEST(BaseMatrix, BaseMatrix) {

compare(&BaseMatrix::addRowVector);
compare(&BaseMatrix::mulRowVector);
compare(&BaseMatrix::divRowVector);
compare(&BaseMatrix::mulColVector);
compare(&BaseMatrix::divColVector);
compare(&BaseMatrix::addP2P);
compare(&BaseMatrix::invSqrt);
}
......
@@ -94,7 +94,7 @@ docker build -t paddle:dev --build-arg UBUNTU_MIRROR=mirror://mirrors.ubuntu.com

Given the development image `paddle:dev`, the following command builds PaddlePaddle from the source tree on the development computer (host):

```bash
docker run -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TEST=OFF" -e "RUN_TEST=OFF" paddle:dev
```

This command mounts the source directory on the host into `/paddle` in the container, so the default entry point of `paddle:dev`, `build.sh`, can build the source code with possible local changes. When it writes to `/paddle/build` in the container, it actually writes to `$PWD/build` on the host.
@@ -108,7 +108,11 @@ This command mounts the source directory on the host into `/paddle` in the container

Users can specify the following Docker build arguments with either "ON" or "OFF" value:
- `WITH_GPU`: ***Required***. Generates NVIDIA CUDA GPU code and relies on CUDA libraries.
- `WITH_AVX`: ***Required***. Setting this to "OFF" prevents generating AVX instructions. If you don't know what AVX is, you might want to set it to "ON".
- `WITH_TEST`: ***Optional, default OFF***. Build unit test binaries. Once you have built the unit tests, you can run them manually with the following command:
```bash
docker run -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" paddle:dev sh -c "cd /paddle/build; make coveralls"
```
- `RUN_TEST`: ***Optional, default OFF***. Run unit tests after building. You cannot run unit tests without building them first; see the example below.
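For example, a one-shot build-and-test invocation might look like this (a sketch combining the flags documented above):

```bash
docker run -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TEST=ON" -e "RUN_TEST=ON" paddle:dev
```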
### Build the Production Docker Image
......
@@ -32,10 +32,10 @@ cmake .. \

-DWITH_SWIG_PY=ON \
-DCUDNN_ROOT=/usr/ \
-DWITH_STYLE_CHECK=${WITH_STYLE_CHECK:-OFF} \
-DON_COVERALLS=${WITH_TEST:-OFF} \
-DCMAKE_EXPORT_COMPILE_COMMANDS=ON
make -j `nproc`
if [[ ${RUN_TEST:-OFF} == "ON" ]]; then
make coveralls
fi
make install
......
@@ -21,7 +21,6 @@ function version(){

echo " with_double: @WITH_DOUBLE@"
echo " with_python: @WITH_PYTHON@"
echo " with_rdma: @WITH_RDMA@"
echo " with_timer: @WITH_TIMER@"
echo " with_predict_sdk: @WITH_PREDICT_SDK@"
}
......
@@ -1220,9 +1220,11 @@ def parse_image(image, input_layer_name, image_conf):

def parse_norm(norm, input_layer_name, norm_conf):
    norm_conf.norm_type = norm.norm_type
    config_assert(
        norm.norm_type in
        ['rnorm', 'cmrnorm-projection', 'cross-channel-norm'],
        "norm-type %s is not in [rnorm, cmrnorm-projection, cross-channel-norm]"
        % norm.norm_type)
    norm_conf.channels = norm.channels
    norm_conf.size = norm.size
    norm_conf.scale = norm.scale
@@ -1898,6 +1900,9 @@ class NormLayer(LayerBase):

                   norm_conf)
        self.set_cnn_layer(name, norm_conf.output_y, norm_conf.output_x,
                           norm_conf.channels, False)
        if norm_conf.norm_type == "cross-channel-norm":
            self.create_input_parameter(0, norm_conf.channels,
                                        [norm_conf.channels, 1])
@config_layer('pool')
......
@@ -112,6 +112,7 @@ __all__ = [

    'out_prod_layer',
    'print_layer',
    'priorbox_layer',
    'cross_channel_norm_layer',
    'spp_layer',
    'pad_layer',
    'eos_layer',
@@ -1008,6 +1009,46 @@ def priorbox_layer(input,

        size=size)
@wrap_name_default("cross_channel_norm")
def cross_channel_norm_layer(input, name=None, param_attr=None):
"""
Normalize a layer's output. This layer is necessary for ssd.
This layer applys normalize across the channels of each sample to
a conv layer's output and scale the output by a group of trainable
factors which dimensions equal to the channel's number.
:param name: The Layer Name.
:type name: basestring
:param input: The input layer.
:type input: LayerOutput
:param param_attr: The Parameter Attribute|list.
:type param_attr: ParameterAttribute
:return: LayerOutput
"""
assert input.num_filters is not None
Layer(
name=name,
type=LayerType.NORM_LAYER,
inputs=[
Input(
input.name,
norm=Norm(
norm_type="cross-channel-norm",
channels=input.num_filters,
size=input.size,
scale=0,
pow=0,
blocked=0),
**param_attr.attr)
])
return LayerOutput(
name,
LayerType.NORM_LAYER,
parents=input,
num_filters=input.num_filters,
size=input.size)
@wrap_name_default("seq_pooling") @wrap_name_default("seq_pooling")
@wrap_bias_attr_default(has_bias=False) @wrap_bias_attr_default(has_bias=False)
@wrap_param_default(['pooling_type'], default_factory=lambda _: MaxPooling()) @wrap_param_default(['pooling_type'], default_factory=lambda _: MaxPooling())
......
@@ -20,7 +20,7 @@ TODO(yuyang18): Complete the comments.

import cPickle
import itertools
import numpy
from common import download
import tarfile

__all__ = ['train100', 'test100', 'train10', 'test10']
@@ -55,23 +55,23 @@ def reader_creator(filename, sub_name):

def train100():
    return reader_creator(
        download(CIFAR100_URL, 'cifar', CIFAR100_MD5), 'train')


def test100():
    return reader_creator(download(CIFAR100_URL, 'cifar', CIFAR100_MD5), 'test')


def train10():
    return reader_creator(
        download(CIFAR10_URL, 'cifar', CIFAR10_MD5), 'data_batch')


def test10():
    return reader_creator(
        download(CIFAR10_URL, 'cifar', CIFAR10_MD5), 'test_batch')
def fetch():
    download(CIFAR10_URL, 'cifar', CIFAR10_MD5)
    download(CIFAR100_URL, 'cifar', CIFAR100_MD5)
@@ -17,6 +17,8 @@ import hashlib

import os
import shutil
import sys
import importlib
import paddle.v2.dataset

__all__ = ['DATA_HOME', 'download', 'md5file']
@@ -69,3 +71,13 @@ def dict_add(a_dict, ele):

        a_dict[ele] += 1
    else:
        a_dict[ele] = 1
def fetch_all():
    for module_name in filter(lambda x: not x.startswith("__"),
                              dir(paddle.v2.dataset)):
        if "fetch" in dir(
                importlib.import_module("paddle.v2.dataset.%s" % module_name)):
            getattr(
                importlib.import_module("paddle.v2.dataset.%s" % module_name),
                "fetch")()
@@ -196,3 +196,11 @@ def test():

        words_name='conll05st-release/test.wsj/words/test.wsj.words.gz',
        props_name='conll05st-release/test.wsj/props/test.wsj.props.gz')
    return reader_creator(reader, word_dict, verb_dict, label_dict)
def fetch():
    download(WORDDICT_URL, 'conll05st', WORDDICT_MD5)
    download(VERBDICT_URL, 'conll05st', VERBDICT_MD5)
    download(TRGDICT_URL, 'conll05st', TRGDICT_MD5)
    download(EMB_URL, 'conll05st', EMB_MD5)
    download(DATA_URL, 'conll05st', DATA_MD5)
@@ -123,3 +123,7 @@ def test(word_idx):

def word_dict():
    return build_dict(
        re.compile("aclImdb/((train)|(test))/((pos)|(neg))/.*\.txt$"), 150)
def fetch():
    paddle.v2.dataset.common.download(URL, 'imdb', MD5)
@@ -89,3 +89,7 @@ def train(word_idx, n):

def test(word_idx, n):
    return reader_creator('./simple-examples/data/ptb.valid.txt', word_idx, n)
def fetch():
    paddle.v2.dataset.common.download(URL, "imikolov", MD5)
@@ -106,3 +106,10 @@ def test():

            TEST_IMAGE_MD5),
        paddle.v2.dataset.common.download(TEST_LABEL_URL, 'mnist',
                                          TEST_LABEL_MD5), 100)
def fetch():
    paddle.v2.dataset.common.download(TRAIN_IMAGE_URL, 'mnist', TRAIN_IMAGE_MD5)
    paddle.v2.dataset.common.download(TRAIN_LABEL_URL, 'mnist', TRAIN_LABEL_MD5)
    paddle.v2.dataset.common.download(TEST_IMAGE_URL, 'mnist', TEST_IMAGE_MD5)
    paddle.v2.dataset.common.download(TEST_LABEL_URL, 'mnist', TEST_LABEL_MD5)
@@ -30,6 +30,9 @@ __all__ = [

age_table = [1, 18, 25, 35, 45, 50, 56]

URL = 'http://files.grouplens.org/datasets/movielens/ml-1m.zip'
MD5 = 'c4d9eecfca2ab87c1945afe126590906'


class MovieInfo(object):
    def __init__(self, index, categories, title):

@@ -77,10 +80,7 @@ USER_INFO = None
def __initialize_meta_info__():
    fn = download(URL, "movielens", MD5)
    global MOVIE_INFO
    if MOVIE_INFO is None:
        pattern = re.compile(r'^(.*)\((\d+)\)$')
@@ -205,5 +205,9 @@ def unittest():

    print train_count, test_count


def fetch():
    download(URL, "movielens", MD5)


if __name__ == '__main__':
    unittest()
@@ -125,3 +125,7 @@ def test():

    """
    data_set = load_sentiment_data()
    return reader_creator(data_set[NUM_TRAINING_INSTANCES:])


def fetch():
    nltk.download('movie_reviews', download_dir=common.DATA_HOME)
@@ -89,3 +89,7 @@ def test():

            yield d[:-1], d[-1:]

    return reader


def fetch():
    download(URL, 'uci_housing', MD5)
@@ -16,7 +16,7 @@ wmt14 dataset

"""
import tarfile

from paddle.v2.dataset.common import download

__all__ = ['train', 'test', 'build_dict']
@@ -95,11 +95,13 @@ def reader_creator(tar_file, file_name, dict_size):

def train(dict_size):
    return reader_creator(
        download(URL_TRAIN, 'wmt14', MD5_TRAIN), 'train/train', dict_size)


def test(dict_size):
    return reader_creator(
        download(URL_TRAIN, 'wmt14', MD5_TRAIN), 'test/test', dict_size)


def fetch():
    download(URL_TRAIN, 'wmt14', MD5_TRAIN)