This directory contains the initial open-source implementation of the
distributed TensorFlow runtime, using [gRPC](http://grpc.io) for inter-process
communication.
## Quick start
The gRPC server is included as part of the nightly PIP packages, which you can
download from [the continuous integration site](http://ci.tensorflow.org/view/Nightly/).
Alternatively, you can build an up-to-date PIP package yourself by following the
instructions for building TensorFlow from source.
Once you have successfully built the distributed TensorFlow components, you can
test your installation by starting a local server as follows:
```shell
# Start a TensorFlow server as a single-process "cluster".
$ python
>>> import tensorflow as tf
>>> c = tf.constant("Hello, distributed TensorFlow!")
>>> server = tf.GrpcServer.new_local_server()
>>> sess = tf.Session(server.target)
>>> sess.run(c)
'Hello, distributed TensorFlow!'
```
## Cluster definition
The `tf.GrpcServer.new_local_server()` method creates a single-process cluster.
To create a more realistic distributed cluster, you create a `tf.GrpcServer` by
passing in a `tf.ServerDef` that defines the membership of a TensorFlow cluster,
and then run multiple processes that each have the same cluster definition.
A `tf.ServerDef` comprises a cluster definition (`tf.ClusterDef`), which is the
same for all servers in a cluster; and a job name and task index that identify
one particular task within that cluster.
To construct a `tf.ClusterDef`, use the `tf.make_cluster_def()` function, which
lets you specify the jobs and tasks as a Python dictionary mapping job names to
lists of network addresses. For example:
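A sketch of what such a dictionary might look like. The job names, hostnames,
and ports below are illustrative placeholders, and the exact
`tf.make_cluster_def()` / `tf.ServerDef` usage shown in the comments is an
assumption about the API rather than confirmed behavior:

```python
# Hypothetical cluster with one parameter-server job and one worker job.
# Replace the addresses with your own machines' host:port pairs.
cluster_spec = {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],
}

# Assumed usage (names and signatures not verified):
#   cluster_def = tf.make_cluster_def(cluster_spec)
#   server_def = tf.ServerDef(cluster=cluster_def,
#                             job_name="worker", task_index=0)
#   server = tf.GrpcServer(server_def)

# Each job name maps to an ordered list of addresses; a server's task
# index is its position in that list, so "worker1.example.com:2222"
# above would be worker task 1.
num_workers = len(cluster_spec["worker"])
print(num_workers)
```

Every process in the cluster would be started with the same dictionary, varying
only the `job_name` and `task_index` it passes to its own `tf.ServerDef`.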