Commit 94e0c6bb authored by Austin Anderson

Add new Dockerfile assembler based on partials

This change adds a new suite of TensorFlow dockerfiles. The
dockerfiles come from an assembler controlled by a yaml spec, and are
based on a set of re-usable partial dockerfiles.

The assembler and spec include conveniences like spec validation,
references to other images and specs for minimizing repetition, and arg
expansion.
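For illustration, the arg expansion mentioned above can be sketched as follows. This is a minimal, self-contained sketch of the technique (the function name here is hypothetical; the real logic lives in `construct_contents` in `assembler.py`): each `ARG` directive in a partial is rewritten so it carries the image's default value.

```python
import re

def expand_arg_defaults(partial_contents, args):
  """Rewrite each 'ARG FOO...' line to carry the image's default value."""
  for arg, arg_data in args.items():
    if arg_data.get('default'):
      default = '={}'.format(arg_data['default'])
    else:
      default = ''
    # '.' does not match newlines, so only the matching ARG line is rewritten
    partial_contents = re.sub(r'ARG {}.*'.format(arg),
                              'ARG {}{}'.format(arg, default),
                              partial_contents)
  return partial_contents

partial = 'ARG UBUNTU_VERSION=16.04\nFROM ubuntu:${UBUNTU_VERSION}'
print(expand_arg_defaults(partial, {'UBUNTU_VERSION': {'default': '18.04'}}))
# ARG UBUNTU_VERSION=18.04
# FROM ubuntu:${UBUNTU_VERSION}
```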
Parent 1e0e804c
# WARNING: THESE IMAGES ARE DEPRECATED.
TensorFlow's Dockerfiles are now located in
[`tensorflow/tools/dockerfiles/`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles).
This directory will eventually be removed.
# Using TensorFlow via Docker
This directory contains `Dockerfile`s to make it easy to get up and running with
......
FROM hadolint/hadolint:latest-debian
LABEL maintainer="Austin Anderson <angerson@google.com>"
RUN apt-get update && apt-get install -y python3 python3-pip bash
RUN pip3 install --upgrade pip setuptools pyyaml absl-py cerberus
WORKDIR /tf
VOLUME ["/tf"]
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
# TensorFlow Dockerfiles
This directory houses TensorFlow's Dockerfiles. **DO NOT EDIT THE DOCKERFILES
MANUALLY!** They are maintained by `assembler.py`, which builds Dockerfiles from
the files in `partials/` and the rules in `spec.yml`. See [the Maintaining
section](#maintaining) for more information.
## Building
The Dockerfiles in the `dockerfiles` directory must have their build context set
to **the directory with this README.md** to copy in helper files. For example:
```bash
$ docker build -f ./dockerfiles/cpu.Dockerfile -t tf-cpu .
```
Each Dockerfile has its own set of available `--build-arg`s which are documented
in the Dockerfile itself.
## Maintaining
To make changes to TensorFlow's Dockerfiles, you'll update `spec.yml` and the
`*.partial.Dockerfile` files in the `partials` directory, then run
`assembler.py` to re-generate the full Dockerfiles before creating a pull
request.
You can use the `Dockerfile` in this directory to build an editing environment
that has all of the Python dependencies you'll need:
```bash
$ docker build -t tf-assembler .
# Set --user to set correct permissions on generated files
$ docker run --user $(id -u):$(id -g) -it -v $(pwd):/tf tf-assembler bash
# In the container...
/tf $ python3 ./assembler.py -o dockerfiles -s spec.yml --validate
```
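For orientation, here is a minimal, hypothetical `spec.yml` fragment (names and values are illustrative only; the real `spec.yml` in this directory is authoritative). It shows the shorthand args form, and how one image can reference another image's partial list:

```yaml
# Hypothetical spec fragment -- names and values are examples only
header: |
  THIS IS A GENERATED DOCKERFILE.

partials:
  ubuntu:
    desc: Start from Ubuntu
    args:
      UBUNTU_VERSION: 16.04   # shorthand for { default: 16.04 }
  jupyter:
    desc: Launch Jupyter on execution instead of a bash prompt.

images:
  cpu:
    desc: CPU-only environment
    partials:
      - ubuntu
  cpu-jupyter:
    desc: CPU-only environment with Jupyter
    partials:
      - image: cpu     # expands into cpu's partial list
      - jupyter
```

The assembler flattens the `image:` reference and normalizes the shorthand args before generating Dockerfiles.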
"""Assemble common TF Dockerfiles from many parts.

TODO(angerson): DO NOT SUBMIT without a detailed description of assembler.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import errno
import os
import os.path
import re
import shutil
import textwrap
from absl import app
from absl import flags
import cerberus
import yaml
FLAGS = flags.FLAGS

flags.DEFINE_boolean(
    'dry_run', False, 'Do not actually generate Dockerfiles', short_name='n')

flags.DEFINE_string(
    'spec_file',
    './spec.yml',
    'Path to a YAML specification file',
    short_name='s')

flags.DEFINE_string(
    'output_dir',
    '.', ('Path to an output directory for Dockerfiles. '
          'Will be created if it doesn\'t exist.'),
    short_name='o')

flags.DEFINE_string(
    'partial_dir',
    './partials',
    'Path to a directory containing foo.partial.Dockerfile partial files.',
    short_name='p')

flags.DEFINE_boolean(
    'quiet_dry_run',
    True,
    'Do not print contents of dry run Dockerfiles.',
    short_name='q')

flags.DEFINE_boolean(
    'validate', True, 'Validate generated Dockerfiles', short_name='c')
# Schema to verify the contents of spec.yml with Cerberus.
# Must be converted to a dict from yaml to work.
# Note: can add python references with e.g.
# !!python/name:builtins.str
# !!python/name:__main__.funcname
SCHEMA_TEXT = """
header:
  type: string

partials:
  type: dict
  keyschema:
    type: string
  valueschema:
    type: dict
    schema:
      desc:
        type: string
      args:
        type: dict
        keyschema:
          type: string
        valueschema:
          anyof:
          - type: [ boolean, number, string ]
          - type: dict
            schema:
              default:
                type: [ boolean, number, string ]
              desc:
                type: string
              options:
                type: list
                schema:
                  type: string

images:
  keyschema:
    type: string
  valueschema:
    type: dict
    schema:
      desc:
        type: string
      arg-defaults:
        type: list
        schema:
          anyof:
          - type: dict
            keyschema:
              type: string
              arg_in_use: true
            valueschema:
              type: string
          - type: string
            isimage: true
      create-dockerfile:
        type: boolean
      partials:
        type: list
        schema:
          anyof:
          - type: dict
            keyschema:
              type: string
              regex: image
            valueschema:
              type: string
              isimage: true
          - type: string
            ispartial: true
"""
class TfDockerValidator(cerberus.Validator):
  """Custom Cerberus validator for TF dockerfile spec.

  Note that each custom validator's docstring must end with a segment
  describing its own validation schema.
  """

  def _validate_ispartial(self, ispartial, field, value):
    """Validate that a partial references an existing partial spec.

    Args:
      ispartial: Value of the rule, a bool
      field: The field being validated
      value: The field's value

    The rule's arguments are validated against this schema:
    {'type': 'boolean'}
    """
    if ispartial and value not in self.root_document.get('partials', dict()):
      self._error(field, '{} is not an existing partial.'.format(value))

  def _validate_isimage(self, isimage, field, value):
    """Validate that an image references an existing image spec.

    Args:
      isimage: Value of the rule, a bool
      field: The field being validated
      value: The field's value

    The rule's arguments are validated against this schema:
    {'type': 'boolean'}
    """
    if isimage and value not in self.root_document.get('images', dict()):
      self._error(field, '{} is not an existing image.'.format(value))

  def _validate_arg_in_use(self, arg_in_use, field, value):
    """Validate that an arg references an existing partial spec's args.

    Args:
      arg_in_use: Value of the rule, a bool
      field: The field being validated
      value: The field's value

    The rule's arguments are validated against this schema:
    {'type': 'boolean'}
    """
    if arg_in_use:
      for partial in self.root_document.get('partials', dict()).values():
        if value in partial.get('args', tuple()):
          return

      self._error(field, '{} is not an arg used in any partial.'.format(value))
def build_partial_description(partial_spec):
  """Create the documentation lines for a specific partial.

  Generates something like this:

    # This is the partial's description, from spec.yml.
    # --build-arg ARG_NAME=argdefault
    # this is one of the args.
    # --build-arg ANOTHER_ARG=(some|choices)
    # another arg.

  Args:
    partial_spec: A dict representing one of the partials from spec.yml.
      Doesn't include the name of the partial; is a dict like
      { desc: ..., args: ... }.

  Returns:
    A commented string describing this partial.
  """
  # Start from linewrapped desc field
  lines = []
  wrapper = textwrap.TextWrapper(
      initial_indent='# ', subsequent_indent='# ', width=80)
  description = wrapper.fill(partial_spec.get('desc', '( no comments )'))
  lines.extend(['#', description])

  # Document each arg
  for arg, arg_data in partial_spec.get('args', dict()).items():
    # Wrap arg description with comment lines
    desc = arg_data.get('desc', '( no description )')
    desc = textwrap.fill(
        desc,
        initial_indent='# ',
        subsequent_indent='# ',
        width=80,
        drop_whitespace=False)

    # Document (each|option|like|this)
    if 'options' in arg_data:
      arg_options = ' ({})'.format('|'.join(arg_data['options']))
    else:
      arg_options = ''

    # Add usage sample
    arg_use = '# --build-arg {}={}{}'.format(
        arg, arg_data.get('default', '(unset)'), arg_options)
    lines.extend([arg_use, desc])

  return '\n'.join(lines)
def construct_contents(partial_specs, image_spec):
  """Assemble the dockerfile contents for an image spec.

  It assembles a concrete list of partial references into a single, large
  string.
  Also expands argument defaults, so that the resulting Dockerfile doesn't
  have to be configured with --build-arg=... every time. That is, any ARG
  directive will be updated with a new default value.

  Args:
    partial_specs: The dict from spec.yml["partials"].
    image_spec: One of the dict values from spec.yml["images"].

  Returns:
    A string containing a valid Dockerfile based on the partials listed in
    image_spec.
  """
  processed_partial_strings = []
  for partial_name in image_spec['partials']:
    # Apply image arg-defaults to existing arg defaults
    partial_spec = copy.deepcopy(partial_specs[partial_name])
    args = partial_spec.get('args', dict())
    for k_v in image_spec.get('arg-defaults', []):
      arg, value = list(k_v.items())[0]
      if arg in args:
        args[arg]['default'] = value

    # Read partial file contents
    filename = partial_spec.get('file', partial_name)
    partial_path = os.path.join(FLAGS.partial_dir,
                                '{}.partial.Dockerfile'.format(filename))
    with open(partial_path, 'r') as f_partial:
      partial_contents = f_partial.read()

    # Replace ARG FOO=BAR with ARG FOO=[new-default]
    for arg, arg_data in args.items():
      if 'default' in arg_data and arg_data['default']:
        default = '={}'.format(arg_data['default'])
      else:
        default = ''
      partial_contents = re.sub(r'ARG {}.*'.format(arg),
                                'ARG {}{}'.format(arg, default),
                                partial_contents)
    processed_partial_strings.append(partial_contents)

  return '\n'.join(processed_partial_strings)
# Create a directory and its parents, even if it already exists
def mkdir_p(path):
  try:
    os.makedirs(path)
  except OSError as e:
    if e.errno != errno.EEXIST:
      raise
def construct_documentation(header, partial_specs, image_spec):
  """Assemble all of the documentation for a single dockerfile.

  Builds explanations of included partials and available build args.

  Args:
    header: The string from spec.yml["header"]; will be commented and wrapped.
    partial_specs: The dict from spec.yml["partials"].
    image_spec: The spec for the dockerfile being built.

  Returns:
    A string containing a commented header that documents the contents of the
    dockerfile.
  """
  # Comment and wrap header and image description
  commented_header = '\n'.join(['# ' + l for l in header.splitlines()])
  commented_desc = '\n'.join(
      ['# ' + l for l in image_spec.get('desc', '').splitlines()])
  partial_descriptions = []

  # Build documentation for each partial in the image
  for partial in image_spec['partials']:
    # Copy partial data for default args unique to this image
    partial_spec = copy.deepcopy(partial_specs[partial])
    args = partial_spec.get('args', dict())

    # Overwrite any existing arg defaults
    for k_v in image_spec.get('arg-defaults', []):
      arg, value = list(k_v.items())[0]
      if arg in args:
        args[arg]['default'] = value

    # Build the description from new args
    partial_description = build_partial_description(partial_spec)
    partial_descriptions.append(partial_description)

  contents = [commented_header, '#', commented_desc] + partial_descriptions
  return '\n'.join(contents) + '\n'
def normalize_partial_args(partial_specs):
  """Normalize the shorthand form of a partial's args specification.

  Turns this:

    partial:
      args:
        SOME_ARG: arg_value

  Into this:

    partial:
      args:
        SOME_ARG:
          default: arg_value

  Args:
    partial_specs: The dict from spec.yml["partials"]. This dict is modified
      in place.

  Returns:
    The modified contents of partial_specs.
  """
  for _, partial in partial_specs.items():
    args = partial.get('args', dict())
    for arg, value in args.items():
      if not isinstance(value, dict):
        new_value = {'default': value}
        args[arg] = new_value

  return partial_specs
def flatten_args_references(image_specs):
  """Resolve all default-args in each image spec to a concrete dict.

  Turns this:

    example-image:
      arg-defaults:
      - MY_ARG: ARG_VALUE

    another-example:
      arg-defaults:
      - ANOTHER_ARG: ANOTHER_VALUE
      - example-image

  Into this:

    example-image:
      arg-defaults:
      - MY_ARG: ARG_VALUE

    another-example:
      arg-defaults:
      - ANOTHER_ARG: ANOTHER_VALUE
      - MY_ARG: ARG_VALUE

  Args:
    image_specs: A dict of image_spec dicts; should be the contents of the
      "images" key in the global spec.yaml. This dict is modified in place and
      then returned.

  Returns:
    The modified contents of image_specs.
  """
  for _, image_spec in image_specs.items():
    too_deep = 0
    while str in map(type, image_spec.get('arg-defaults', [])) and too_deep < 5:
      new_args = []
      for arg in image_spec['arg-defaults']:
        if isinstance(arg, str):
          new_args.extend(image_specs[arg]['arg-defaults'])
        else:
          new_args.append(arg)
      image_spec['arg-defaults'] = new_args
      too_deep += 1

  return image_specs
def flatten_partial_references(image_specs):
  """Resolve all partial references in each image spec to a concrete list.

  Turns this:

    example-image:
      partials:
      - foo

    another-example:
      partials:
      - bar
      - image: example-image
      - bat

  Into this:

    example-image:
      partials:
      - foo

    another-example:
      partials:
      - bar
      - foo
      - bat

  Args:
    image_specs: A dict of image_spec dicts; should be the contents of the
      "images" key in the global spec.yaml. This dict is modified in place and
      then returned.

  Returns:
    The modified contents of image_specs.
  """
  for _, image_spec in image_specs.items():
    too_deep = 0
    while dict in map(type, image_spec['partials']) and too_deep < 5:
      new_partials = []
      for partial in image_spec['partials']:
        if isinstance(partial, str):
          new_partials.append(partial)
        else:
          new_partials.extend(image_specs[partial['image']]['partials'])
      image_spec['partials'] = new_partials
      too_deep += 1

  return image_specs
def construct_dockerfiles(tf_spec):
  """Generate a mapping of {"cpu": <cpu dockerfile contents>, ...}.

  Args:
    tf_spec: The full spec.yml loaded as a python object.

  Returns:
    A string:string dict of short names ("cpu-devel") to Dockerfile contents.
  """
  names_to_contents = dict()
  image_specs = tf_spec['images']
  image_specs = flatten_partial_references(image_specs)
  image_specs = flatten_args_references(image_specs)
  partial_specs = tf_spec['partials']
  partial_specs = normalize_partial_args(partial_specs)

  for name, image_spec in image_specs.items():
    if not image_spec.get('create-dockerfile', True):
      continue
    documentation = construct_documentation(tf_spec['header'], partial_specs,
                                            image_spec)
    contents = construct_contents(partial_specs, image_spec)
    names_to_contents[name] = '\n'.join([documentation, contents])

  return names_to_contents
def main(argv):
  if len(argv) > 1:
    raise app.UsageError('Too many command-line arguments.')

  with open(FLAGS.spec_file, 'r') as spec_file:
    tf_spec = yaml.load(spec_file)

  # Abort if spec.yaml is invalid
  if FLAGS.validate:
    schema = yaml.load(SCHEMA_TEXT)
    v = TfDockerValidator(schema)
    if not v.validate(tf_spec):
      print('>> ERROR: {} is an invalid spec! The errors are:'.format(
          FLAGS.spec_file))
      print(yaml.dump(v.errors, indent=2))
      exit(1)
  else:
    print('>> WARNING: Not validating {}'.format(FLAGS.spec_file))

  # Generate mapping of { "cpu-devel": "<cpu-devel dockerfile contents>", ... }
  names_to_contents = construct_dockerfiles(tf_spec)

  # Write each completed Dockerfile
  if not FLAGS.dry_run:
    print('>> Emptying destination dir "{}"'.format(FLAGS.output_dir))
    shutil.rmtree(FLAGS.output_dir, ignore_errors=True)
    mkdir_p(FLAGS.output_dir)
  else:
    print('>> Skipping creation of {} (dry run)'.format(FLAGS.output_dir))

  for name, contents in names_to_contents.items():
    path = os.path.join(FLAGS.output_dir, name + '.Dockerfile')
    if FLAGS.dry_run:
      print('>> Skipping writing contents of {} (dry run)'.format(path))
      print(contents)
    else:
      mkdir_p(FLAGS.output_dir)
      print('>> Writing {}'.format(path))
      with open(path, 'w') as f:
        f.write(contents)


if __name__ == '__main__':
  app.run(main)
export PS1="\[\e[31m\]tf-docker\[\e[m\] \[\e[33m\]\w\[\e[m\] > "
export TERM=xterm-256color
alias grep="grep --color=auto"
alias ls="ls --color=auto"
echo -e "\e[1;31m"
cat<<TF
________ _______________
___ __/__________________________________ ____/__ /________ __
__ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / /
_ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ /
/_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/
TF
echo -e "\e[0;33m"
if [[ $EUID -eq 0 ]]; then
cat <<WARN
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u \$(id -u):\$(id -g) args...
WARN
else
cat <<EXPL
You are running this container as user with ID $(id -u) and group $(id -g),
which should map to the ID and group for your user on the Docker host. Great!
EXPL
fi
echo -e "\e[m"
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# below. Please refer to the TensorFlow dockerfiles documentation for
# more information. Build args are documented with their default values.
#
# Ubuntu-based, CPU-only environment for developing changes for TensorFlow, with Jupyter included.
#
# Start from Ubuntu, with TF development packages (no GPU support)
# --build-arg UBUNTU_VERSION=16.04
# ( no description )
#
# Python is required for TensorFlow and other libraries.
# --build-arg USE_PYTHON_3_NOT_2=True
# Install python 3 over Python 2
#
# Install the latest version of Bazel and Python development tools.
#
# Configure TensorFlow's shell prompt and login tools.
#
# Launch Jupyter on execution instead of a bash prompt.
ARG UBUNTU_VERSION=16.04
FROM ubuntu:${UBUNTU_VERSION}
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
git \
libcurl3-dev \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
python-dev \
rsync \
software-properties-common \
unzip \
zip \
zlib1g-dev \
openjdk-8-jdk \
openjdk-8-jre-headless \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
RUN apt-get update && apt-get install -y \
build-essential \
curl \
git \
openjdk-8-jdk \
${PYTHON}-dev \
swig
# Install bazel
RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list && \
curl https://bazel.build/bazel-release.pub.gpg | apt-key add - && \
apt-get update && \
apt-get install -y bazel
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
RUN ${PIP} install jupyter
RUN mkdir /notebooks && chmod 777 /notebooks
RUN mkdir /.local && chmod 777 /.local
WORKDIR /notebooks
EXPOSE 8888
CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/notebooks --ip 0.0.0.0 --no-browser --allow-root"]
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# below. Please refer to the TensorFlow dockerfiles documentation for
# more information. Build args are documented with their default values.
#
# Ubuntu-based, CPU-only environment for developing changes for TensorFlow.
#
# Start from Ubuntu, with TF development packages (no GPU support)
# --build-arg UBUNTU_VERSION=16.04
# ( no description )
#
# Python is required for TensorFlow and other libraries.
# --build-arg USE_PYTHON_3_NOT_2=True
# Install python 3 over Python 2
#
# Install the latest version of Bazel and Python development tools.
#
# Configure TensorFlow's shell prompt and login tools.
ARG UBUNTU_VERSION=16.04
FROM ubuntu:${UBUNTU_VERSION}
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
git \
libcurl3-dev \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
python-dev \
rsync \
software-properties-common \
unzip \
zip \
zlib1g-dev \
openjdk-8-jdk \
openjdk-8-jre-headless \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
RUN apt-get update && apt-get install -y \
build-essential \
curl \
git \
openjdk-8-jdk \
${PYTHON}-dev \
swig
# Install bazel
RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list && \
curl https://bazel.build/bazel-release.pub.gpg | apt-key add - && \
apt-get update && \
apt-get install -y bazel
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# below. Please refer to the TensorFlow dockerfiles documentation for
# more information. Build args are documented with their default values.
#
# Ubuntu-based, CPU-only environment for using TensorFlow, with Jupyter included.
#
# Start from Ubuntu (no GPU support)
# --build-arg UBUNTU_VERSION=16.04
# ( no description )
#
# Python is required for TensorFlow and other libraries.
# --build-arg USE_PYTHON_3_NOT_2=True
# Install python 3 over Python 2
#
# Install the TensorFlow Python package.
# --build-arg TF_PACKAGE=tensorflow (tensorflow|tensorflow-gpu|tf-nightly|tf-nightly-gpu)
# The specific TensorFlow Python package to install
#
# Configure TensorFlow's shell prompt and login tools.
#
# Launch Jupyter on execution instead of a bash prompt.
ARG UBUNTU_VERSION=16.04
FROM ubuntu:${UBUNTU_VERSION}
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
ARG TF_PACKAGE=tensorflow
RUN ${PIP} install ${TF_PACKAGE}
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
RUN ${PIP} install jupyter
RUN mkdir /notebooks && chmod 777 /notebooks
RUN mkdir /.local && chmod 777 /.local
WORKDIR /notebooks
EXPOSE 8888
CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/notebooks --ip 0.0.0.0 --no-browser --allow-root"]
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# below. Please refer to the TensorFlow dockerfiles documentation for
# more information. Build args are documented with their default values.
#
# Ubuntu-based, CPU-only environment for using TensorFlow
#
# Start from Ubuntu (no GPU support)
# --build-arg UBUNTU_VERSION=16.04
# ( no description )
#
# Python is required for TensorFlow and other libraries.
# --build-arg USE_PYTHON_3_NOT_2=True
# Install python 3 over Python 2
#
# Install the TensorFlow Python package.
# --build-arg TF_PACKAGE=tensorflow (tensorflow|tensorflow-gpu|tf-nightly|tf-nightly-gpu)
# The specific TensorFlow Python package to install
#
# Configure TensorFlow's shell prompt and login tools.
ARG UBUNTU_VERSION=16.04
FROM ubuntu:${UBUNTU_VERSION}
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
ARG TF_PACKAGE=tensorflow
RUN ${PIP} install ${TF_PACKAGE}
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# below. Please refer to the TensorFlow dockerfiles documentation for
# more information. Build args are documented with their default values.
#
# Ubuntu-based, Nvidia-GPU-enabled environment for developing changes for TensorFlow, with Jupyter included.
#
# Start from Nvidia's Ubuntu base image with CUDA and CuDNN, with TF development
# packages.
# --build-arg UBUNTU_VERSION=16.04
# ( no description )
#
# Python is required for TensorFlow and other libraries.
# --build-arg USE_PYTHON_3_NOT_2=True
# Install python 3 over Python 2
#
# Install the latest version of Bazel and Python development tools.
#
# Configure TensorFlow's shell prompt and login tools.
#
# Launch Jupyter on execution instead of a bash prompt.
ARG UBUNTU_VERSION=16.04
FROM nvidia/cuda:9.0-base-ubuntu${UBUNTU_VERSION}
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-9-0 \
cuda-cublas-dev-9-0 \
cuda-cudart-dev-9-0 \
cuda-cufft-dev-9-0 \
cuda-curand-dev-9-0 \
cuda-cusolver-dev-9-0 \
cuda-cusparse-dev-9-0 \
curl \
git \
libcudnn7=7.1.4.18-1+cuda9.0 \
libcudnn7-dev=7.1.4.18-1+cuda9.0 \
libnccl2=2.2.13-1+cuda9.0 \
libnccl-dev=2.2.13-1+cuda9.0 \
libcurl3-dev \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
rsync \
software-properties-common \
unzip \
zip \
zlib1g-dev \
wget \
&& \
rm -rf /var/lib/apt/lists/* && \
find /usr/local/cuda-9.0/lib64/ -type f -name 'lib*_static.a' -not -name 'libcudart_static.a' -delete && \
rm /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
# Link NCCL library and header where the build script expects them.
RUN mkdir /usr/local/cuda-9.0/lib && \
ln -s /usr/lib/x86_64-linux-gnu/libnccl.so.2 /usr/local/cuda/lib/libnccl.so.2 && \
ln -s /usr/include/nccl.h /usr/local/cuda/include/nccl.h
# TODO(tobyboyd): Remove after license is excluded from BUILD file.
RUN gunzip /usr/share/doc/libnccl2/NCCL-SLA.txt.gz && \
cp /usr/share/doc/libnccl2/NCCL-SLA.txt /usr/local/cuda/
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
RUN apt-get update && apt-get install -y \
build-essential \
curl \
git \
openjdk-8-jdk \
${PYTHON}-dev \
swig
# Install bazel
RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list && \
curl https://bazel.build/bazel-release.pub.gpg | apt-key add - && \
apt-get update && \
apt-get install -y bazel
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
RUN ${PIP} install jupyter
RUN mkdir /notebooks && chmod 777 /notebooks
RUN mkdir /.local && chmod 777 /.local
WORKDIR /notebooks
EXPOSE 8888
CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/notebooks --ip 0.0.0.0 --no-browser --allow-root"]
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# below. Please refer to the TensorFlow dockerfiles documentation for
# more information. Build args are documented with their default values.
#
# Ubuntu-based, Nvidia-GPU-enabled environment for developing changes for TensorFlow.
#
# Start from Nvidia's Ubuntu base image with CUDA and CuDNN, with TF development
# packages.
# --build-arg UBUNTU_VERSION=16.04
# ( no description )
#
# Python is required for TensorFlow and other libraries.
# --build-arg USE_PYTHON_3_NOT_2=True
# Install python 3 over Python 2
#
# Install the latest version of Bazel and Python development tools.
#
# Configure TensorFlow's shell prompt and login tools.
ARG UBUNTU_VERSION=16.04
FROM nvidia/cuda:9.0-base-ubuntu${UBUNTU_VERSION}
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-9-0 \
cuda-cublas-dev-9-0 \
cuda-cudart-dev-9-0 \
cuda-cufft-dev-9-0 \
cuda-curand-dev-9-0 \
cuda-cusolver-dev-9-0 \
cuda-cusparse-dev-9-0 \
curl \
git \
libcudnn7=7.1.4.18-1+cuda9.0 \
libcudnn7-dev=7.1.4.18-1+cuda9.0 \
libnccl2=2.2.13-1+cuda9.0 \
libnccl-dev=2.2.13-1+cuda9.0 \
libcurl3-dev \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
rsync \
software-properties-common \
unzip \
zip \
zlib1g-dev \
wget \
&& \
rm -rf /var/lib/apt/lists/* && \
find /usr/local/cuda-9.0/lib64/ -type f -name 'lib*_static.a' -not -name 'libcudart_static.a' -delete && \
rm /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
# Link NCCL library and header where the build script expects them.
RUN mkdir /usr/local/cuda-9.0/lib && \
ln -s /usr/lib/x86_64-linux-gnu/libnccl.so.2 /usr/local/cuda/lib/libnccl.so.2 && \
ln -s /usr/include/nccl.h /usr/local/cuda/include/nccl.h
# TODO(tobyboyd): Remove after license is excluded from BUILD file.
RUN gunzip /usr/share/doc/libnccl2/NCCL-SLA.txt.gz && \
cp /usr/share/doc/libnccl2/NCCL-SLA.txt /usr/local/cuda/
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
RUN apt-get update && apt-get install -y \
build-essential \
curl \
git \
openjdk-8-jdk \
${PYTHON}-dev \
swig
# Install bazel
RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list && \
curl https://bazel.build/bazel-release.pub.gpg | apt-key add - && \
apt-get update && \
apt-get install -y bazel
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# below. Please refer to the TensorFlow dockerfiles documentation for
# more information. Build args are documented with their default values.
#
# Ubuntu-based, Nvidia-GPU-enabled environment for using TensorFlow, with Jupyter included.
#
# NVIDIA with CUDA and CuDNN, no dev stuff
# --build-arg UBUNTU_VERSION=16.04
# ( no description )
#
# Python is required for TensorFlow and other libraries.
# --build-arg USE_PYTHON_3_NOT_2=True
# Install python 3 over Python 2
#
# Install the TensorFlow Python package.
# --build-arg TF_PACKAGE=tensorflow-gpu (tensorflow|tensorflow-gpu|tf-nightly|tf-nightly-gpu)
# The specific TensorFlow Python package to install
#
# Configure TensorFlow's shell prompt and login tools.
#
# Launch Jupyter on execution instead of a bash prompt.
FROM nvidia/cuda:9.0-base-ubuntu16.04
# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-9-0 \
cuda-cublas-9-0 \
cuda-cufft-9-0 \
cuda-curand-9-0 \
cuda-cusolver-9-0 \
cuda-cusparse-9-0 \
libcudnn7=7.1.4.18-1+cuda9.0 \
libnccl2=2.2.13-1+cuda9.0 \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
software-properties-common \
unzip \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
ARG TF_PACKAGE=tensorflow-gpu
RUN ${PIP} install ${TF_PACKAGE}
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
RUN ${PIP} install jupyter
RUN mkdir /notebooks && chmod 777 /notebooks
RUN mkdir /.local && chmod 777 /.local
WORKDIR /notebooks
EXPOSE 8888
CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/notebooks --ip 0.0.0.0 --no-browser --allow-root"]
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# below. Please refer to the TensorFlow dockerfiles documentation for
# more information. Build args are documented with their default values.
#
# Ubuntu-based, Nvidia-GPU-enabled environment for using TensorFlow.
#
# NVIDIA with CUDA and CuDNN, no dev stuff
# --build-arg UBUNTU_VERSION=16.04
# ( no description )
#
# Python is required for TensorFlow and other libraries.
# --build-arg USE_PYTHON_3_NOT_2=True
# Install python 3 over Python 2
#
# Install the TensorFlow Python package.
# --build-arg TF_PACKAGE=tensorflow-gpu (tensorflow|tensorflow-gpu|tf-nightly|tf-nightly-gpu)
# The specific TensorFlow Python package to install
#
# Configure TensorFlow's shell prompt and login tools.
FROM nvidia/cuda:9.0-base-ubuntu16.04
# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-9-0 \
cuda-cublas-9-0 \
cuda-cufft-9-0 \
cuda-curand-9-0 \
cuda-cusolver-9-0 \
cuda-cusparse-9-0 \
libcudnn7=7.1.4.18-1+cuda9.0 \
libnccl2=2.2.13-1+cuda9.0 \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
software-properties-common \
unzip \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
ARG TF_PACKAGE=tensorflow-gpu
RUN ${PIP} install ${TF_PACKAGE}
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
RUN apt-get update && apt-get install -y \
build-essential \
curl \
git \
openjdk-8-jdk \
${PYTHON}-dev \
swig
# Install bazel
RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list && \
curl https://bazel.build/bazel-release.pub.gpg | apt-key add - && \
apt-get update && \
apt-get install -y bazel
RUN ${PIP} install jupyter
RUN mkdir /notebooks && chmod 777 /notebooks
RUN mkdir /.local && chmod 777 /.local
WORKDIR /notebooks
EXPOSE 8888
CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/notebooks --ip 0.0.0.0 --no-browser --allow-root"]
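The `ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}` lines in the Dockerfiles above rely on the shell-style `${VAR:+word}` parameter expansion, which Docker also applies when resolving ARG defaults. A quick demonstration in plain shell of how the suffix selects `python3`/`pip3` versus `python`/`pip`:

```shell
# Demonstration of the ${VAR:+word} expansion used by the _PY_SUFFIX ARG:
# it expands to "3" only when the variable is set and non-empty.
USE_PYTHON_3_NOT_2=True
_PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
echo "python${_PY_SUFFIX} / pip${_PY_SUFFIX}"   # prints "python3 / pip3"

# When the variable is unset (or empty), the suffix disappears.
unset USE_PYTHON_3_NOT_2
_PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
echo "python${_PY_SUFFIX} / pip${_PY_SUFFIX}"   # prints "python / pip"
```

Note that in the Dockerfiles that declare `ARG USE_PYTHON_3_NOT_2` without a default, the suffix stays empty (Python 2) unless the builder passes `--build-arg USE_PYTHON_3_NOT_2=True`.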
ARG UBUNTU_VERSION=16.04
FROM nvidia/cuda:9.0-base-ubuntu${UBUNTU_VERSION}
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-9-0 \
cuda-cublas-dev-9-0 \
cuda-cudart-dev-9-0 \
cuda-cufft-dev-9-0 \
cuda-curand-dev-9-0 \
cuda-cusolver-dev-9-0 \
cuda-cusparse-dev-9-0 \
curl \
git \
libcudnn7=7.1.4.18-1+cuda9.0 \
libcudnn7-dev=7.1.4.18-1+cuda9.0 \
libnccl2=2.2.13-1+cuda9.0 \
libnccl-dev=2.2.13-1+cuda9.0 \
libcurl3-dev \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
rsync \
software-properties-common \
unzip \
zip \
zlib1g-dev \
wget \
&& \
rm -rf /var/lib/apt/lists/* && \
find /usr/local/cuda-9.0/lib64/ -type f -name 'lib*_static.a' -not -name 'libcudart_static.a' -delete && \
rm /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
# Link NCCL library and header where the build script expects them.
RUN mkdir /usr/local/cuda-9.0/lib && \
ln -s /usr/lib/x86_64-linux-gnu/libnccl.so.2 /usr/local/cuda/lib/libnccl.so.2 && \
ln -s /usr/include/nccl.h /usr/local/cuda/include/nccl.h
# TODO(tobyboyd): Remove after license is excluded from BUILD file.
RUN gunzip /usr/share/doc/libnccl2/NCCL-SLA.txt.gz && \
cp /usr/share/doc/libnccl2/NCCL-SLA.txt /usr/local/cuda/
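The `find ... -not -name ... -delete` step in the partial above slims the image by removing every static CUDA library except `libcudart_static.a`. A throwaway demonstration of that `find` pattern, using a scratch directory rather than the real CUDA tree:

```shell
# Recreate a scratch directory with a few fake static libraries.
rm -rf /tmp/find_demo && mkdir -p /tmp/find_demo
touch /tmp/find_demo/libfoo_static.a \
      /tmp/find_demo/libcudart_static.a \
      /tmp/find_demo/libbar_static.a

# Delete all lib*_static.a files except libcudart_static.a,
# mirroring the cleanup step in the Dockerfile above.
find /tmp/find_demo -type f -name 'lib*_static.a' \
     -not -name 'libcudart_static.a' -delete

ls /tmp/find_demo   # prints "libcudart_static.a"
```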
FROM nvidia/cuda:9.0-base-ubuntu16.04
# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-9-0 \
cuda-cublas-9-0 \
cuda-cufft-9-0 \
cuda-curand-9-0 \
cuda-cusolver-9-0 \
cuda-cusparse-9-0 \
libcudnn7=7.1.4.18-1+cuda9.0 \
libnccl2=2.2.13-1+cuda9.0 \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
software-properties-common \
unzip \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ARG USE_PYTHON_3_NOT_2
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
${PYTHON} \
${PYTHON}-pip
RUN ${PIP} install --upgrade \
pip \
setuptools
COPY bashrc /etc/bash.bashrc
RUN chmod 777 /etc/bash.bashrc
ARG UBUNTU_VERSION=16.04
FROM ubuntu:${UBUNTU_VERSION}
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
git \
libcurl3-dev \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
python-dev \
rsync \
software-properties-common \
unzip \
zip \
zlib1g-dev \
openjdk-8-jdk \
openjdk-8-jre-headless \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ARG UBUNTU_VERSION=16.04
FROM ubuntu:${UBUNTU_VERSION}
# ======
# HEADER
# ======
# This is commented-out and prepended to each generated Dockerfile.
header: |
THIS IS A GENERATED DOCKERFILE.
This file was assembled from multiple pieces, whose use is documented
below. Please refer to the TensorFlow dockerfiles documentation for
more information. Build args are documented as their default value.
# ========
# PARTIALS
# ========
# Represent and document pieces of a Dockerfile.
# Spec:
#
# name: the name of the partial, referenced from other sections
# desc: A description, inserted later into the Dockerfile
# file: Alternative file prefix, e.g. file.partial.Dockerfile (default = name)
# args: A dict of ARGs in the Dockerfile; each entry has the format
# ARG_NAME: VALUE where VALUE is
# - a concrete value: becomes the default
# - a dict:
# desc: Arg description
# default: Default value for the arg; is written to the Dockerfile
# options: List of strings, part of documentation
partials:
ubuntu:
desc: Start from Ubuntu (no GPU support)
args:
UBUNTU_VERSION: 16.04
ubuntu-devel:
desc: Start from Ubuntu, with TF development packages (no GPU support)
args:
UBUNTU_VERSION: 16.04
bazel:
desc: Install the latest version of Bazel and Python development tools.
nvidia:
desc: NVIDIA with CUDA and CuDNN, no dev stuff
args:
UBUNTU_VERSION: 16.04
nvidia-devel:
desc: >
Start from Nvidia's Ubuntu base image with CUDA and CuDNN, with TF
development packages.
args:
UBUNTU_VERSION: 16.04
python:
desc: Python is required for TensorFlow and other libraries.
args:
USE_PYTHON_3_NOT_2:
default: true
desc: Install Python 3 instead of Python 2
tensorflow:
desc: Install the TensorFlow Python package.
args:
TF_PACKAGE:
default: tensorflow
options:
- tensorflow
- tensorflow-gpu
- tf-nightly
- tf-nightly-gpu
desc: The specific TensorFlow Python package to install
shell:
desc: Configure TensorFlow's shell prompt and login tools.
jupyter:
desc: Launch Jupyter on execution instead of a bash prompt.
# ===========
# DOCKERFILES
# ===========
# Represent dockerfiles.
# Spec:
#
# name: the name of the image, referenced from other sections
# desc: A description, inserted later into the Dockerfile
# create-dockerfile: Create a dockerfile based on this. Useful for creating
# base images. Default is true
# partials: List of VALUEs, where a VALUE is either:
# - the name of a partial, which inserts that partial into this file
# - image: [name of another image], which inserts the partials from that
# image into this file
# arg-defaults: List of VALUEs, where a VALUE is either:
# - the name of another image, which loads the default args from that image
# - ARG_NAME: VALUE, which is exactly what you'd expect
images:
nodev:
create-dockerfile: false
partials:
- python
- tensorflow
- shell
dev:
create-dockerfile: false
partials:
- python
- bazel
- shell
cpu:
desc: Ubuntu-based, CPU-only environment for using TensorFlow
partials:
- ubuntu
- image: nodev
cpu-devel:
desc: >
Ubuntu-based, CPU-only environment for developing changes for
TensorFlow.
partials:
- ubuntu-devel
- image: dev
nvidia:
desc: Ubuntu-based, Nvidia-GPU-enabled environment for using TensorFlow.
arg-defaults:
- TF_PACKAGE: tensorflow-gpu
partials:
- nvidia
- image: nodev
nvidia-devel:
desc: >
Ubuntu-based, Nvidia-GPU-enabled environment for developing changes
for TensorFlow.
arg-defaults:
- TF_PACKAGE: tensorflow-gpu
partials:
- nvidia-devel
- image: dev
cpu-jupyter:
desc: >
Ubuntu-based, CPU-only environment for using TensorFlow, with Jupyter
included.
partials:
- image: cpu
- jupyter
cpu-devel-jupyter:
desc: >
Ubuntu-based, CPU-only environment for developing changes for
TensorFlow, with Jupyter included.
partials:
- image: cpu-devel
- jupyter
nvidia-jupyter:
desc: >
Ubuntu-based, Nvidia-GPU-enabled environment for using TensorFlow, with
Jupyter included.
arg-defaults:
- nvidia
partials:
- image: nvidia
- jupyter
nvidia-devel-jupyter:
desc: >
Ubuntu-based, Nvidia-GPU-enabled environment for developing changes for
TensorFlow, with Jupyter included.
arg-defaults:
- nvidia-devel
partials:
- image: nvidia-devel
- jupyter
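The `- image: <name>` entries in the `partials` lists above are references that splice in another image's partials. As a minimal sketch of how such references could be resolved (this is illustrative, not the real `assembler.py`; the dict layout is assumed from the spec above):

```python
# Illustrative resolver for "image:" references in a spec like the
# one above. Not the real assembler.py.
def flatten_partials(image_name, images):
    """Recursively expand `- image: other` entries into partial names."""
    result = []
    for entry in images[image_name]["partials"]:
        if isinstance(entry, dict) and "image" in entry:
            # Reference to another image: inline that image's partials.
            result.extend(flatten_partials(entry["image"], images))
        else:
            # Plain partial name.
            result.append(entry)
    return result

# Subset of the images section above, in already-parsed form.
images = {
    "nodev": {"partials": ["python", "tensorflow", "shell"]},
    "nvidia": {"partials": ["nvidia", {"image": "nodev"}]},
    "nvidia-jupyter": {"partials": [{"image": "nvidia"}, "jupyter"]},
}

print(flatten_partials("nvidia-jupyter", images))
# -> ['nvidia', 'python', 'tensorflow', 'shell', 'jupyter']
```

That flattened sequence matches the order of pieces visible in the generated `nvidia-jupyter` Dockerfile: the nvidia base, then the python/tensorflow/shell partials pulled in via `nodev`, then jupyter.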