# Build process re-designed
Created by: wangkuiyi
We need to complete the initial draft at https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/scripts/docker/README.md. I am recording some ideas here, and we should file a PR later.
## Current Status

Currently, we have four sets of Dockerfiles:
- Kubernetes examples:
  - `doc/howto/usage/k8s/src/Dockerfile` -- based on a released image, but adds `start.sh`
  - `doc/howto/usage/k8s/src/k8s_data/Dockerfile` -- contains only `get_data.sh`
  - `doc/howto/usage/k8s/src/k8s_train/Dockerfile` -- duplicates the first one
- Generating .deb packages:
  - `paddle/scripts/deb/build_scripts/Dockerfile` -- significantly overlaps with the `docker` directory
- In the `docker` directory:
  - `paddle/scripts/docker/Dockerfile`
  - `paddle/scripts/docker/Dockerfile.gpu`
- Document building:
  - `paddle/scripts/tools/build_docs/Dockerfile` -- a subset of the above two sets
## Goal

We want two Docker images for each version of PaddlePaddle:
- `paddle:<version>-dev`

  This is a development image that contains only the development tools. It standardizes the build tools and procedure. Its users include:

  - developers -- who no longer need to install development tools on the host, and can build their work checked out on the host (the development computer).
  - release engineers -- who use it to build official releases from a certain branch/tag on GitHub.com.
  - document writers / Web developers -- our documents live in the source repo as .md/.rst files and as comments in the source code; we need tools to extract this information, typeset it, and generate Web pages.

  So the development image must contain not only source-code building tools, but also documentation tools:
  - gcc/clang
  - nvcc
  - Python
  - Sphinx
  - woboq
  - sshd

  Here, `sshd` makes it easy for developers to have multiple terminals connected to the container.
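The tool list above could translate into a development Dockerfile along these lines. This is a hypothetical sketch, not the actual file: the base image, package names, and the woboq step are all assumptions.

```dockerfile
# Hypothetical sketch of the development image; the base image and
# package names are assumptions, not the actual Dockerfile.
# nvcc comes from the CUDA base image.
FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04

RUN apt-get update && apt-get install -y \
    build-essential clang cmake git \
    python python-pip openssh-server && \
    pip install sphinx

# The woboq codebrowser would be built from source here.

# sshd lets developers open multiple terminals into the container.
RUN mkdir /var/run/sshd
EXPOSE 22

CMD ["/bin/bash"]
```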
- `paddle:<version>`

  This is the production image, generated using the development image. This image might have multiple variants:
  - GPU/AVX: `paddle:<version>-gpu`
  - GPU/no-AVX: `paddle:<version>-gpu-noavx`
  - no-GPU/AVX: `paddle:<version>`
  - no-GPU/no-AVX: `paddle:<version>-noavx`

  We'd like to give users the choice between GPU and no-GPU versions, because the GPU image is much larger than the no-GPU one.

  We'd like to give users the choice between AVX and no-AVX versions, because some cloud providers don't provide AVX-enabled VMs.
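The variant naming above follows a simple rule: append `-gpu` when GPU support is on, and `-noavx` when AVX is off. A small shell sketch of the rule (the function name and version string are illustrative, not part of any actual script):

```shell
# Hypothetical helper mapping the GPU/AVX switches to the image tags
# listed above; the function name and version are illustrative.
tag_for() {
  local gpu=$1 avx=$2 version=$3
  local tag="paddle:${version}"
  [ "$gpu" = "ON" ] && tag="${tag}-gpu"
  [ "$avx" = "OFF" ] && tag="${tag}-noavx"
  echo "$tag"
}

tag_for ON ON 0.10.0    # paddle:0.10.0-gpu
tag_for OFF OFF 0.10.0  # paddle:0.10.0-noavx
```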
## Dockerfile
To realize the above goals, we need only one Dockerfile, for the development image. We can put it in the root of the source directory.

Let us go over our daily development procedure to show how developers can use this file.
- Check out the source code:

  ```bash
  git clone https://github.com/PaddlePaddle/Paddle paddle
  ```
- Do something:

  ```bash
  cd paddle
  git checkout -b my_work
  # edit some files
  ```
- Build/update the development image (if not yet built):

  ```bash
  # Suppose that the Dockerfile is in the root source directory.
  docker build -t paddle:dev .
  ```
- Build the source code:

  ```bash
  docker run -v $PWD:/paddle -e "GPU=OFF" -e "AVX=ON" -e "TEST=ON" paddle:dev
  ```

  This command maps the source directory on the host into `/paddle` in the container. Please be aware that the default entrypoint of `paddle:dev` is a shell script file, `build.sh`, which builds the source code and outputs to `/paddle/build` in the container, which is actually `$PWD/build` on the host. `build.sh` doesn't only build binaries; it also generates a `$PWD/build/Dockerfile` file, which can be used to build the production image. We will talk about this later.
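The `build.sh` entrypoint described above might look roughly like the following sketch. The CMake flags, paths, and the generated Dockerfile's contents are assumptions, not the real script.

```shell
#!/bin/bash
# Hypothetical sketch of build.sh; flags, paths, and the generated
# Dockerfile's contents are assumptions, not the real script.
set -e

# Build the source mapped into the container, honoring the GPU/AVX/TEST
# switches passed via `docker run -e`.
build_paddle() {
  local src=${1:-/paddle}
  mkdir -p "$src/build" && cd "$src/build"
  cmake .. -DWITH_GPU="${GPU:-OFF}" -DWITH_AVX="${AVX:-ON}" \
           -DWITH_TESTING="${TEST:-ON}"
  make -j"$(nproc)"
  if [ "${TEST:-ON}" = "ON" ]; then ctest --output-on-failure; fi
  make install
}

# Besides building binaries, emit a Dockerfile for the production image.
generate_prod_dockerfile() {
  local build_dir=${1:-/paddle/build}
  mkdir -p "$build_dir"
  cat > "$build_dir/Dockerfile" <<'EOF'
FROM ubuntu:16.04
COPY . /usr/local/opt/paddle
EOF
}
```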
- Run on the host (not recommended):

  If the host computer happens to have all dependent libraries and Python runtimes installed, we can run/test the built program directly on the host. But the recommended way is to run it in a production image.
- Run in the development container:

  `build.sh` generates binary files and invokes `make install`, so we can run the built program within the development container. This is convenient for developers.
- Build a production image:

  On the host, we can use `$PWD/build/Dockerfile` to generate the production image:

  ```bash
  docker build -t paddle --build-arg "BOOK=ON" -f build/Dockerfile .
  ```
- Run the Paddle Book:

  Once we have the production image, we can run the Paddle Book chapters in Jupyter Notebooks (if we chose to build them):

  ```bash
  docker run -it paddle
  ```

  Note that the default entrypoint of the production image starts a Jupyter server, if we chose to build the Paddle Book.
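The generated `build/Dockerfile` for the production image might look roughly like this sketch. The base image, install paths, port, and the handling of the `BOOK` build-arg are all assumptions.

```dockerfile
# Hypothetical sketch of the generated production Dockerfile; the base
# image, paths, and BOOK handling are assumptions.
FROM ubuntu:16.04
ARG BOOK=OFF

# Copy the artifacts that `make install` placed under build/.
COPY . /usr/local/opt/paddle
ENV PATH=/usr/local/opt/paddle/bin:$PATH

# If the Paddle Book was built, serve its notebooks by default.
EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser"]
```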
- Run on Kubernetes:

  We can push the production image to a DockerHub server, so developers can run distributed training jobs on a Kubernetes cluster:

  ```bash
  docker tag paddle me/paddle
  docker push me/paddle
  kubectl ...
  ```

  For end users, we will provide more convenient tools to run distributed jobs.
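A distributed training job on Kubernetes could then reference the pushed image with a manifest along these lines. All names, counts, and the training command are illustrative, not part of any actual tooling.

```yaml
# Hypothetical Kubernetes Job using the pushed production image;
# names, counts, and the training command are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: paddle-train
spec:
  parallelism: 4
  template:
    spec:
      containers:
      - name: trainer
        image: me/paddle
        command: ["paddle", "train"]
      restartPolicy: Never
```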