Commit d95fa1bb authored by Josh Levenberg, committed by TensorFlower Gardener

Update generated Python Op docs.

Change: 118369028
Parent: e745b2b0
@@ -1971,7 +1971,7 @@ want them run by *N* threads.
- - -
-### `tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#batch}
+### `tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, shared_name=None, name=None)` {#batch}
Creates batches of tensors in `tensor_list`.
@@ -1995,10 +1995,18 @@ operation is feeding another input queue, its queue runner will catch
this exception, however, if this operation is used in your main thread
you are responsible for catching this yourself.
-*N.B.:* You must ensure that either (i) the `shapes` argument is
-passed, or (ii) all of the tensors in `tensor_list` must have
-fully-defined shapes. `ValueError` will be raised if neither of
-these conditions holds.
+*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
+(i) the `shapes` argument is passed, or (ii) all of the tensors in
+`tensor_list` must have fully-defined shapes. `ValueError` will be
+raised if neither of these conditions holds.
+
+If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
+tensors is known, but individual dimensions may have shape `None`.
+In this case, for each enqueue the dimensions with value `None`
+may have a variable length; upon dequeue, the output tensors will be padded
+on the right to the maximum shape of the tensors in the current minibatch.
+For numbers, this padding takes value 0. For strings, this padding is
+the empty string. See `PaddingFIFOQueue` for more info.
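For example, a minimal sketch of `dynamic_pad=True` under an assumed toy pipeline, where each example is a random-length prefix of a constant vector (so only its rank is statically known):

```python
import tensorflow as tf

# Assumed toy source of variable-length examples: a random-length prefix of
# `data`. Only the rank of `example` is statically known (shape `[None]`).
data = tf.constant([1, 2, 3, 4, 5], dtype=tf.int64)
length = tf.random_uniform([], minval=1, maxval=6, dtype=tf.int32)
example = tf.slice(data, [0], tf.reshape(length, [1]))

# With `dynamic_pad=True`, each dequeued minibatch is padded on the right
# with zeros up to the longest example in that minibatch.
batch = tf.train.batch([example], batch_size=4, capacity=32, dynamic_pad=True)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(batch))  # shape (4, longest length in this minibatch)
    coord.request_stop()
    coord.join(threads)
```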
##### Args:
@@ -2010,6 +2018,9 @@ these conditions holds.
* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list` is a single example.
* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensor_list`.
+* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
+  The given dimensions are padded upon dequeue so that tensors within a
+  batch have the same shapes.
* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
name across multiple sessions.
* <b>`name`</b>: (Optional) A name for the operations.
@@ -2027,7 +2038,7 @@ these conditions holds.
- - -
-### `tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#batch_join}
+### `tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, shared_name=None, name=None)` {#batch_join}
Runs a list of tensors to fill a queue to create batches of examples.
@@ -2061,10 +2072,18 @@ operation is feeding another input queue, its queue runner will catch
this exception, however, if this operation is used in your main thread
you are responsible for catching this yourself.
-*N.B.:* You must ensure that either (i) the `shapes` argument is
-passed, or (ii) all of the tensors in `tensor_list_list` must have
-fully-defined shapes. `ValueError` will be raised if neither of
-these conditions holds.
+*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
+(i) the `shapes` argument is passed, or (ii) all of the tensors in
+`tensor_list_list` must have fully-defined shapes. `ValueError` will be
+raised if neither of these conditions holds.
+
+If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
+tensors is known, but individual dimensions may have value `None`.
+In this case, for each enqueue the dimensions with value `None`
+may have a variable length; upon dequeue, the output tensors will be padded
+on the right to the maximum shape of the tensors in the current minibatch.
+For numbers, this padding takes value 0. For strings, this padding is
+the empty string. See `PaddingFIFOQueue` for more info.
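The usage mirrors `tf.train.batch`; a minimal sketch, assuming two toy sources (the hypothetical `variable_length_example` helper stands in for per-thread readers) that each emit variable-length `int64` vectors:

```python
import tensorflow as tf

def variable_length_example(values):
    # Hypothetical helper: a random-length prefix of `values`, so the rank is
    # statically known but the length is not.
    data = tf.constant(values, dtype=tf.int64)
    length = tf.random_uniform([], minval=1, maxval=len(values) + 1,
                               dtype=tf.int32)
    return tf.slice(data, [0], tf.reshape(length, [1]))

# One inner list per "reader"; batch_join dequeues from all of them into a
# single minibatch, right-padded to the longest example it contains.
tensor_list_list = [[variable_length_example([1, 2, 3])],
                    [variable_length_example([4, 5, 6, 7])]]
batch = tf.train.batch_join(tensor_list_list, batch_size=4, capacity=32,
                            dynamic_pad=True)
```

Evaluating `batch` requires the same `tf.train.Coordinator`/`tf.train.start_queue_runners` setup shown above for `tf.train.batch`.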
##### Args:
@@ -2076,6 +2095,9 @@ these conditions holds.
example.
* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensor_list_list[i]`.
+* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
+  The given dimensions are padded upon dequeue so that tensors within a
+  batch have the same shapes.
* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
* <b>`name`</b>: (Optional) A name for the operations.