Commit 15f608f9 authored by A. Unique TensorFlower, committed by TensorFlower Gardener

Update generated Python Op docs.

Change: 138017961
Parent 593c8e44
......
@@ -2240,6 +2240,92 @@ count ==> [2, 1, 3, 1, 2]
* <b>`count`</b>: A `Tensor` of type `out_idx`. 1-D.
- - -
### `tf.scatter_nd(indices, updates, shape, name=None)` {#scatter_nd}
Creates a new tensor by applying sparse `updates` to individual values or slices within a zero tensor of the given `shape`, according to `indices`.
This operator is the inverse of the [tf.gather_nd](#gather_nd) operator, which extracts values or slices from a given tensor.
TODO(simister): Add a link to Variable.__getitem__ documentation on slice syntax.
`shape` is a `TensorShape` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into the output tensor.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `shape`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, shape[K], ..., shape[P-1]].
```
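To make the rank bookkeeping concrete, here is a small sketch in plain Python that checks the `Q-1+P-K` rule against the two examples below; the helper `expected_updates_shape` is purely illustrative and not part of the API.

```
def expected_updates_shape(indices_shape, output_shape):
  """Shape that `updates` must have, per the rule above (illustrative helper)."""
  q = len(indices_shape)          # rank of `indices`
  p = len(output_shape)           # rank of the output tensor
  k = indices_shape[-1]           # innermost dimension of `indices`
  assert 0 < k <= p
  # [d_0, ..., d_{Q-2}] + [shape[K], ..., shape[P-1]]
  return list(indices_shape[:q - 1]) + list(output_shape[k:])

# Element scatter (K = P = 1): four scalar updates into a length-8 vector.
print(expected_updates_shape([4, 1], [8]))        # -> [4]
# Slice scatter (K = 1 < P = 3): two 4x4 slices into a 4x4x4 tensor.
print(expected_updates_shape([2, 1], [4, 4, 4]))  # -> [2, 4, 4]
```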
The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.
<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:100%" src="../../images/ScatterNd1.png" alt>
</div>
In Python, this scatter operation would look like this:

```
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
  print(sess.run(scatter))
```

The resulting tensor would look like this:

```
[0, 11, 0, 10, 9, 0, 0, 12]
```
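Because `scatter_nd` is described above as the inverse of `tf.gather_nd`, a quick sanity check (a sketch reusing the tensors from the example above) is to gather at the same indices and recover `updates`:

```
import tensorflow as tf

indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
# Gathering at the same indices reads back exactly the scattered values.
recovered = tf.gather_nd(scatter, indices)
with tf.Session() as sess:
  print(sess.run(recovered))  # -> [ 9 10 11 12], i.e. the original updates
```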
We can also insert entire slices of a higher-rank tensor all at once. For example, suppose we want to insert two slices into the first dimension of a rank-3 tensor, using two matrices of new values.
<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:100%" src="../../images/ScatterNd2.png" alt>
</div>
In Python, this scatter operation would look like this:

```
indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
  print(sess.run(scatter))
```

The resulting tensor would look like this:

```
[[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
 [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
```
##### Args:


* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into the output tensor.
* <b>`updates`</b>: A `Tensor`.
    A tensor of updated values to store at the given indices.
* <b>`shape`</b>: A `Tensor`. Must have the same type as `indices`.
    A vector; the shape of the resulting tensor.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A `Tensor`. Has the same type as `updates`.
  A new tensor with the given shape and `updates` applied according to `indices`.
- - -
### `tf.dynamic_partition(data, partitions, num_partitions, name=None)` {#dynamic_partition}
......
### `tf.scatter_nd_sub(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_sub}
Applies sparse subtraction between `updates` and individual values or slices within a given variable according to `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(sub))
```

The resulting update to ref would look like this:

```
[1, -9, 3, -6, -4, 6, 7, -4]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
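For the slice case (`K < P`), here is a minimal sketch on a rank-2 variable whose rows are the slices being subtracted from; the variable and values are illustrative, not taken from the docs above.

```
import tensorflow as tf

# ref has shape [3, 4] (P = 2) and indices has shape [2, 1] (K = 1),
# so updates must have shape [2, 4] and each update row is subtracted
# from the corresponding row of ref.
ref = tf.Variable([[10, 10, 10, 10],
                   [20, 20, 20, 20],
                   [30, 30, 30, 30]])
indices = tf.constant([[0], [2]])
updates = tf.constant([[1, 2, 3, 4],
                       [5, 6, 7, 8]])
row_sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  print(sess.run(row_sub))
  # -> [[ 9  8  7  6]
  #     [20 20 20 20]
  #     [25 24 23 22]]
```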
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    A mutable tensor. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of updated values to subtract from `ref`.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.

- - -
### `tf.scatter_nd_add(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_add}
Applies sparse addition between `updates` and individual values or slices within a given variable according to `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = tf.scatter_nd_add(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(add))
```

The resulting update to ref would look like this:

```
[1, 13, 3, 14, 14, 6, 7, 20]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    A mutable tensor. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of updated values to add to `ref`.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.

- - -
### `tf.scatter_nd_div(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_div}
Applies sparse division to individual values or slices within a given variable according to `indices`: the targeted values or slices of the variable are divided by `updates`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to divide a rank-1 tensor with 8 elements by 4 scattered elements. In Python, that division would look like this:

```
ref = tf.Variable([10, 20, 30, 40, 50, 60, 70, 80])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([2, 3, 4, 5])
div = tf.scatter_nd_div(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(div))
```

The resulting update to ref would look like this (integer division, since the tensors are int32):

```
[10, 5, 30, 13, 25, 60, 70, 16]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
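The example above uses `int32` tensors, so the quotients are truncated by integer division; with a floating-point variable the same call performs true division. A minimal sketch with illustrative values:

```
import tensorflow as tf

ref = tf.Variable([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([2.0, 3.0, 4.0, 5.0])
div = tf.scatter_nd_div(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  # Note 40.0 / 3.0 is now 13.333... instead of the truncated 13.
  print(sess.run(div))  # -> [10. 5. 30. 13.333... 25. 60. 70. 16.]
```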
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    A mutable tensor. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of values that `ref` is divided by.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.

- - -
### `tf.scatter_nd(indices, updates, shape, name=None)` {#scatter_nd}
Creates a new tensor by applying sparse `updates` to individual values or slices within a zero tensor of the given `shape`, according to `indices`.
This operator is the inverse of the [tf.gather_nd](#gather_nd) operator, which extracts values or slices from a given tensor.
TODO(simister): Add a link to Variable.__getitem__ documentation on slice syntax.
`shape` is a `TensorShape` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into the output tensor.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `shape`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, shape[K], ..., shape[P-1]].
```
The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.
<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:100%" src="../../images/ScatterNd1.png" alt>
</div>
In Python, this scatter operation would look like this:

```
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
  print(sess.run(scatter))
```

The resulting tensor would look like this:

```
[0, 11, 0, 10, 9, 0, 0, 12]
```
We can also insert entire slices of a higher-rank tensor all at once. For example, suppose we want to insert two slices into the first dimension of a rank-3 tensor, using two matrices of new values.
<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:100%" src="../../images/ScatterNd2.png" alt>
</div>
In Python, this scatter operation would look like this:

```
indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
  print(sess.run(scatter))
```

The resulting tensor would look like this:

```
[[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
 [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
```
##### Args:


* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into the output tensor.
* <b>`updates`</b>: A `Tensor`.
    A tensor of updated values to store at the given indices.
* <b>`shape`</b>: A `Tensor`. Must have the same type as `indices`.
    A vector; the shape of the resulting tensor.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A `Tensor`. Has the same type as `updates`.
  A new tensor with the given shape and `updates` applied according to `indices`.

- - -
### `tf.scatter_nd_update(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_update}
Applies sparse `updates` to individual values or slices within a given variable according to `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to update 4 scattered elements in a rank-1 tensor with 8 elements. In Python, that update would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
update = tf.scatter_nd_update(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(update))
```

The resulting update to ref would look like this:

```
[1, 11, 3, 10, 9, 6, 7, 12]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
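When `K = P`, each row of `indices` addresses a single element rather than a slice. A minimal sketch on a rank-2 variable, with illustrative values:

```
import tensorflow as tf

# ref has shape [2, 3] (P = 2) and each index row has K = 2 entries,
# so every update value replaces one element of the matrix.
ref = tf.Variable([[1, 2, 3],
                   [4, 5, 6]])
indices = tf.constant([[0, 1], [1, 2]])
updates = tf.constant([20, 60])
update = tf.scatter_nd_update(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  print(sess.run(update))
  # -> [[ 1 20  3]
  #     [ 4  5 60]]
```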
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of updated values to assign to `ref`.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `True`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.

- - -
### `tf.scatter_nd_mul(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_mul}
Applies sparse multiplication to individual values or slices within a given variable according to `indices`: the targeted values or slices of the variable are multiplied by `updates`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to multiply a rank-1 tensor with 8 elements by 4 scattered elements. In Python, that multiplication would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
mul = tf.scatter_nd_mul(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(mul))
```

The resulting update to ref would look like this:

```
[1, 22, 3, 40, 45, 6, 7, 96]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    A mutable tensor. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of values to multiply `ref` by.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
......
@@ -107,6 +107,11 @@
* [`scatter_add`](../../api_docs/python/state_ops.md#scatter_add)
* [`scatter_div`](../../api_docs/python/state_ops.md#scatter_div)
* [`scatter_mul`](../../api_docs/python/state_ops.md#scatter_mul)
* [`scatter_nd_add`](../../api_docs/python/state_ops.md#scatter_nd_add)
* [`scatter_nd_div`](../../api_docs/python/state_ops.md#scatter_nd_div)
* [`scatter_nd_mul`](../../api_docs/python/state_ops.md#scatter_nd_mul)
* [`scatter_nd_sub`](../../api_docs/python/state_ops.md#scatter_nd_sub)
* [`scatter_nd_update`](../../api_docs/python/state_ops.md#scatter_nd_update)
* [`scatter_sub`](../../api_docs/python/state_ops.md#scatter_sub)
* [`scatter_update`](../../api_docs/python/state_ops.md#scatter_update)
* [`sparse_mask`](../../api_docs/python/state_ops.md#sparse_mask)
......
@@ -148,6 +153,7 @@
* [`reverse`](../../api_docs/python/array_ops.md#reverse)
* [`reverse_sequence`](../../api_docs/python/array_ops.md#reverse_sequence)
* [`saturate_cast`](../../api_docs/python/array_ops.md#saturate_cast)
* [`scatter_nd`](../../api_docs/python/array_ops.md#scatter_nd)
* [`sequence_mask`](../../api_docs/python/array_ops.md#sequence_mask)
* [`setdiff1d`](../../api_docs/python/array_ops.md#setdiff1d)
* [`shape`](../../api_docs/python/array_ops.md#shape)
......
......
@@ -2942,6 +2942,280 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
to use the updated values after the update is done.
- - -
### `tf.scatter_nd_update(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_update}
Applies sparse `updates` to individual values or slices within a given variable according to `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to update 4 scattered elements in a rank-1 tensor with 8 elements. In Python, that update would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
update = tf.scatter_nd_update(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(update))
```

The resulting update to ref would look like this:

```
[1, 11, 3, 10, 9, 6, 7, 12]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
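For the slice case (`K < P`) of this op, here is a minimal sketch in which whole rows of a rank-2 variable are replaced; the variable and values are illustrative.

```
import tensorflow as tf

# ref has shape [3, 2] (P = 2) and indices has shape [2, 1] (K = 1),
# so each row of `updates` (shape [2, 2]) replaces an entire row of ref.
ref = tf.Variable([[1, 1], [2, 2], [3, 3]])
indices = tf.constant([[2], [0]])
updates = tf.constant([[30, 30], [10, 10]])
row_update = tf.scatter_nd_update(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  print(sess.run(row_update))
  # -> [[10 10]
  #     [ 2  2]
  #     [30 30]]
```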
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of updated values to assign to `ref`.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `True`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
- - -
### `tf.scatter_nd_add(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_add}
Applies sparse addition between `updates` and individual values or slices within a given variable according to `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = tf.scatter_nd_add(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(add))
```

The resulting update to ref would look like this:

```
[1, 13, 3, 14, 14, 6, 7, 20]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    A mutable tensor. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of updated values to add to `ref`.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
- - -
### `tf.scatter_nd_sub(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_sub}
Applies sparse subtraction between `updates` and individual values or slices within a given variable according to `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(sub))
```

The resulting update to ref would look like this:

```
[1, -9, 3, -6, -4, 6, 7, -4]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    A mutable tensor. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of updated values to subtract from `ref`.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
- - -
### `tf.scatter_nd_mul(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_mul}
Applies sparse multiplication to individual values or slices within a given variable according to `indices`: the targeted values or slices of the variable are multiplied by `updates`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to multiply a rank-1 tensor with 8 elements by 4 scattered elements. In Python, that multiplication would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
mul = tf.scatter_nd_mul(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(mul))
```

The resulting update to ref would look like this:

```
[1, 22, 3, 40, 45, 6, 7, 96]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    A mutable tensor. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of values to multiply `ref` by.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
- - -
### `tf.scatter_nd_div(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_div}
Applies sparse division to individual values or slices within a given variable according to `indices`: the targeted values or slices of the variable are divided by `updates`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be an integer tensor, containing indices into `ref`.
It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
For example, say we want to divide a rank-1 tensor with 8 elements by 4 scattered elements. In Python, that division would look like this:

```
ref = tf.Variable([10, 20, 30, 40, 50, 60, 70, 80])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([2, 3, 4, 5])
div = tf.scatter_nd_div(ref, indices, updates)
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())  # the variable must be initialized first
  print(sess.run(div))
```

The resulting update to ref would look like this (integer division, since the tensors are int32):

```
[10, 5, 30, 13, 25, 60, 70, 16]
```
See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to slices.
##### Args:


* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    A mutable tensor. Should be from a Variable node.
* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
    A tensor of indices into `ref`.
* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
    A tensor of values that `ref` is divided by.
* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
    If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
* <b>`name`</b>: A name for the operation (optional).

##### Returns:

  A mutable `Tensor`. Has the same type as `ref`.
  Same as `ref`. Returned as a convenience for operations that want to use the updated values after the update is done.
- - -
### `tf.sparse_mask(a, mask_indices, name=None)` {#sparse_mask}
......