Commit 4fb692b8 authored by Andrew Selle, committed by Amit Patankar

Documentation changes to adhere to new doc generator

Change: 147382677
Parent 6636d794
......@@ -12,33 +12,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Linear algebra libraries for TensorFlow.
## `LinearOperator`
Subclasses of `LinearOperator` provide access to common methods on a
(batch) matrix, without the need to materialize the matrix. This allows:
* Matrix-free computations
* Different operators to take advantage of special structure, while providing a
  consistent API to users (see the sketch below).
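As a rough illustration, here is a minimal sketch using `LinearOperatorDiag`;
the `determinant` and `to_dense` methods are assumed to match the contrib API
of this release.

```python
import tensorflow as tf

# A 3 x 3 diagonal operator; the dense matrix is never formed unless
# explicitly requested via to_dense().
diag = tf.constant([1., 2., 3.])
operator = tf.contrib.linalg.LinearOperatorDiag(diag)

det = operator.determinant()   # computed from the diagonal alone
dense = operator.to_dense()    # only if an explicit matrix is really needed

with tf.Session() as sess:
  print(sess.run([det, dense]))
```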
### Base class
"""Linear algebra libraries. See the @{$python/contrib.linalg} guide.
@@LinearOperator
### Individual operators
@@LinearOperatorDiag
@@LinearOperatorIdentity
@@LinearOperatorScaledIdentity
@@LinearOperatorMatrix
@@LinearOperatorTriL
### Transformations and Combinations of operators
@@LinearOperatorUDVHUpdate
@@LinearOperatorComposition
"""
from __future__ import absolute_import
from __future__ import division
......
......@@ -13,7 +13,7 @@
# limitations under the License.
# ==============================================================================
"""Ops for building neural network losses."""
"""Ops for building neural network losses. See @{$python/contrib.losses}."""
from __future__ import absolute_import
from __future__ import division
......
......@@ -12,90 +12,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""##Ops for evaluation metrics and summary statistics.
"""Ops for evaluation metrics and summary statistics.
### API
This module provides functions for computing streaming metrics: metrics computed
on dynamically valued `Tensors`. Each metric declaration returns a
"value_tensor", an idempotent operation that returns the current value of the
metric, and an "update_op", an operation that accumulates the information
from the current value of the `Tensors` being measured and also returns the
value of the "value_tensor".
To use any of these metrics, one need only declare the metric, call `update_op`
repeatedly to accumulate data over the desired number of `Tensor` values (often
each one is a single batch), and finally evaluate the "value_tensor". For
example, to use `streaming_mean`:
```python
values = ...
mean_value, update_op = tf.contrib.metrics.streaming_mean(values)
sess.run(tf.local_variables_initializer())
for i in range(number_of_batches):
  print('Mean after batch %d: %f' % (i, update_op.eval()))
print('Final Mean: %f' % mean_value.eval())
```
Each metric function adds nodes to the graph that hold the state necessary to
compute the value of the metric as well as a set of operations that actually
perform the computation. Every metric evaluation is composed of three steps:
* Initialization: initializing the metric state.
* Aggregation: updating the values of the metric state.
* Finalization: computing the final metric value.
In the above example, calling `streaming_mean` creates a pair of state variables
that will contain (1) the running sum and (2) the count of the number of samples
in the sum. Because the streaming metrics use local variables,
the Initialization stage is performed by running the op returned
by `tf.local_variables_initializer()`. It sets the sum and count variables to
zero.
Next, Aggregation is performed by examining the current state of `values`
and incrementing the state variables appropriately. This step is executed by
running the `update_op` returned by the metric.
Finally, finalization is performed by evaluating the "value_tensor".
In practice, we commonly want to evaluate across many batches and multiple
metrics. To do so, we need only run the metric computation operations multiple
times:
```python
labels = ...
predictions = ...
accuracy, update_op_acc = tf.contrib.metrics.streaming_accuracy(
    labels, predictions)
error, update_op_error = tf.contrib.metrics.streaming_mean_absolute_error(
    labels, predictions)
sess.run(tf.local_variables_initializer())
for batch in range(num_batches):
  sess.run([update_op_acc, update_op_error])
accuracy, mean_absolute_error = sess.run([accuracy, error])
```
Note that when evaluating the same metric multiple times on different inputs,
one must specify the scope of each metric to avoid accumulating the results
together:
```python
labels = ...
predictions0 = ...
predictions1 = ...
accuracy0, update_op0 = tf.contrib.metrics.streaming_accuracy(
    labels, predictions0, name='preds0')
accuracy1, update_op1 = tf.contrib.metrics.streaming_accuracy(
    labels, predictions1, name='preds1')
```
Certain metrics, such as `streaming_mean` or `streaming_accuracy`, can be
weighted via a `weights` argument. The `weights` tensor must be the same size as
the labels and predictions tensors and results in a weighted average of the
metric, as in the sketch below.
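A minimal sketch of the weighted case, mirroring the `streaming_mean` example
above (the `weights` tensor and the batching loop are illustrative assumptions):

```python
values = ...
weights = ...  # same shape as `values`
weighted_mean, update_op = tf.contrib.metrics.streaming_mean(
    values, weights=weights)
sess.run(tf.local_variables_initializer())
for _ in range(num_batches):
  sess.run(update_op)
print('Weighted mean: %f' % weighted_mean.eval())
```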
## Metric `Ops`
See the @{$python/contrib.metrics} guide.
@@streaming_accuracy
@@streaming_mean
......@@ -130,18 +49,11 @@ labels and predictions tensors and results in a weighted average of the metric.
@@streaming_true_negatives_at_thresholds
@@streaming_true_positives
@@streaming_true_positives_at_thresholds
@@auc_using_histogram
@@accuracy
@@aggregate_metrics
@@aggregate_metric_map
@@confusion_matrix
## Set `Ops`
@@set_difference
@@set_intersection
@@set_size
......
......@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""opt: A module containing optimization routines."""
"""A module containing optimization routines."""
from __future__ import absolute_import
from __future__ import division
......
......@@ -12,39 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Optimizer that computes a moving average of the variables.
Empirically it has been found that using the moving average of the trained
parameters of a deep network is better than using its trained parameters
directly. This optimizer allows you to compute this moving average and swap the
variables at save time so that any code outside of the training loop will, by
default, use the averaged values instead of the original ones.
Example of usage:
```python
# Encapsulate your favorite optimizer (here the momentum one)
# inside the MovingAverageOptimizer.
opt = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
opt = tf.contrib.opt.MovingAverageOptimizer(opt)
# Then create your model and all its variables.
model = build_model()
# Add the training op that optimizes using opt.
# This needs to be called before swapping_saver().
opt.minimize(cost, var_list)
# Then create your saver like this:
saver = opt.swapping_saver()
# Pass it to your training loop.
slim.learning.train(
    model,
    ...
    saver=saver)
```
Note that for evaluation, the normal saver should be used instead of
swapping_saver().
"""
"""Moving average optimizer."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
......@@ -60,7 +29,39 @@ from tensorflow.python.training import saver
class MovingAverageOptimizer(optimizer.Optimizer):
"""Optimizer wrapper that maintains a moving average of parameters."""
"""Optimizer that computes a moving average of the variables.
Empirically it has been found that using the moving average of the trained
parameters of a deep network is better than using its trained parameters
directly. This optimizer allows you to compute this moving average and swap
the variables at save time so that any code outside of the training loop will,
by default, use the averaged values instead of the original ones.
Example of usage:
```python
# Encapsulate your favorite optimizer (here the momentum one)
# inside the MovingAverageOptimizer.
opt = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
opt = tf.contrib.opt.MovingAverageOptimizer(opt)
# Then create your model and all its variables.
model = build_model()
# Add the training op that optimizes using opt.
# This needs to be called before swapping_saver().
opt.minimize(cost, var_list)
# Then create your saver like this:
saver = opt.swapping_saver()
# Pass it to your training loop.
slim.learning.train(
    model,
    ...
    saver=saver)
```
Note that for evaluation, the normal saver should be used instead of
swapping_saver().
"""
def __init__(self, opt, average_decay=0.9999, num_updates=None,
sequential_update=True):
......
......@@ -12,57 +12,34 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Module for constructing RNN Cells and additional RNN operations.
## Base interface for all RNN Cells
"""RNN Cells and additional RNN operations. See @{$python/contrib.rnn} guide.
@@RNNCell
## RNN Cells for use with TensorFlow's core RNN methods
@@BasicRNNCell
@@BasicLSTMCell
@@GRUCell
@@LSTMCell
@@LayerNormBasicLSTMCell
## Classes storing split `RNNCell` state
@@LSTMStateTuple
## RNN Cell wrappers (RNNCells that wrap other RNNCells)
@@MultiRNNCell
@@LSTMBlockWrapper
@@DropoutWrapper
@@EmbeddingWrapper
@@InputProjectionWrapper
@@OutputProjectionWrapper
### Block RNNCells
@@DeviceWrapper
@@ResidualWrapper
@@LSTMBlockCell
@@GRUBlockCell
### Fused RNNCells
@@FusedRNNCell
@@FusedRNNCellAdaptor
@@TimeReversedFusedRNN
@@LSTMBlockFusedCell
### LSTM-like cells
@@CoupledInputForgetGateLSTMCell
@@TimeFreqLSTMCell
@@GridLSTMCell
### RNNCell wrappers
@@AttentionCellWrapper
## Recurrent Neural Networks
TensorFlow provides a number of methods for constructing Recurrent Neural
Networks; a short usage sketch follows the list below.
@@CompiledWrapper
@@static_rnn
@@static_state_saving_rnn
@@static_bidirectional_rnn
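As a rough sketch of how the cells and static RNN methods above fit together
(shapes such as `batch_size`, `input_size`, `num_steps`, and `num_units` are
illustrative assumptions):

```python
import tensorflow as tf

batch_size, input_size, num_steps, num_units = 32, 10, 5, 128

# One input Tensor per time step, as expected by the static RNN methods.
inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
          for _ in range(num_steps)]

# Wrap a basic LSTM cell with dropout, then unroll it over the inputs.
cell = tf.contrib.rnn.BasicLSTMCell(num_units)
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.9)
outputs, final_state = tf.contrib.rnn.static_rnn(cell, inputs, dtype=tf.float32)
```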
......
......@@ -14,157 +14,50 @@
# ==============================================================================
# pylint: disable=g-short-docstring-punctuation
"""## Encoding and Decoding
TensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded
images are represented by scalar string Tensors, decoded images by 3-D uint8
tensors of shape `[height, width, channels]`. (PNG also supports uint16.)
The encode and decode Ops apply to one image at a time. Their input and output
are all of variable size. If you need fixed size images, pass the output of
the decode Ops to one of the cropping and resizing Ops.
Note: The PNG encode and decode Ops support RGBA, but the conversion Ops
presently only support RGB, HSV, and grayscale. For now, the alpha channel has
to be stripped from the image and re-attached using slicing ops (see the sketch
after the list below).
"""Image processing and decoding ops. See the @{$python/image} guide.
@@decode_gif
@@decode_jpeg
@@encode_jpeg
@@decode_png
@@encode_png
@@decode_image
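As a minimal sketch of composing a decode op with an encode op (the file name
is a placeholder):

```python
import tensorflow as tf

# Read an encoded PNG from disk, decode it to a uint8 [height, width, 3]
# tensor, and re-encode it as a JPEG-encoded scalar string Tensor.
encoded_png = tf.read_file('input.png')  # placeholder path
image = tf.image.decode_png(encoded_png, channels=3)
encoded_jpeg = tf.image.encode_jpeg(image)
```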
## Resizing
The resizing Ops accept input images as tensors of several types. They always
output resized images as float32 tensors.
The convenience function [`resize_images()`](#resize_images) supports both 4-D
and 3-D tensors as input and output. 4-D tensors are for batches of images,
3-D tensors for individual images.
Other resizing Ops only support 4-D batches of images as input:
[`resize_area`](#resize_area), [`resize_bicubic`](#resize_bicubic),
[`resize_bilinear`](#resize_bilinear),
[`resize_nearest_neighbor`](#resize_nearest_neighbor).
Example:
```python
# Decode a JPEG image and resize it to 299 by 299 using the default method.
image = tf.image.decode_jpeg(...)
resized_image = tf.image.resize_images(image, [299, 299])
```
@@resize_images
@@resize_area
@@resize_bicubic
@@resize_bilinear
@@resize_nearest_neighbor
## Cropping
@@resize_image_with_crop_or_pad
@@central_crop
@@pad_to_bounding_box
@@crop_to_bounding_box
@@extract_glimpse
@@crop_and_resize
## Flipping, Rotating and Transposing
@@flip_up_down
@@random_flip_up_down
@@flip_left_right
@@random_flip_left_right
@@transpose_image
@@rot90
## Converting Between Colorspaces.
Image ops work either on individual images or on batches of images, depending on
the shape of their input Tensor.
If 3-D, the shape is `[height, width, channels]`, and the Tensor represents one
image. If 4-D, the shape is `[batch_size, height, width, channels]`, and the
Tensor represents `batch_size` images.
Currently, `channels` can usefully be 1, 2, 3, or 4. Single-channel images are
grayscale, images with 3 channels are encoded as either RGB or HSV. Images
with 2 or 4 channels include an alpha channel, which has to be stripped from the
image before passing the image to most image processing functions (and can be
re-attached later).
Internally, images are either stored as one `float32` per channel per pixel
(implicitly, values are assumed to lie in `[0,1)`) or one `uint8` per channel
per pixel (values are assumed to lie in `[0,255]`).
TensorFlow can convert between images in RGB or HSV. The conversion functions
work only on float images, so you need to convert images in other formats using
[`convert_image_dtype`](#convert_image_dtype).
Example:
```python
# Decode an image and convert it to HSV.
rgb_image = tf.image.decode_png(..., channels=3)
rgb_image_float = tf.image.convert_image_dtype(rgb_image, tf.float32)
hsv_image = tf.image.rgb_to_hsv(rgb_image_float)
```
@@rgb_to_grayscale
@@grayscale_to_rgb
@@hsv_to_rgb
@@rgb_to_hsv
@@convert_image_dtype
## Image Adjustments
TensorFlow provides functions to adjust images in various ways: brightness,
contrast, hue, and saturation. Each adjustment can be done with predefined
parameters or with random parameters picked from predefined intervals. Random
adjustments are often useful to expand a training set and reduce overfitting.
If several adjustments are chained, it is advisable to minimize the number of
redundant conversions by first converting the images to the most natural data
type and representation (RGB or HSV), as in the sketch after the list below.
@@adjust_brightness
@@random_brightness
@@adjust_contrast
@@random_contrast
@@adjust_hue
@@random_hue
@@adjust_gamma
@@adjust_saturation
@@random_saturation
@@per_image_standardization
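A minimal sketch of chaining adjustments (the delta and contrast bounds are
illustrative values):

```python
image = tf.image.decode_jpeg(...)
# Convert once up front so the chained adjustments avoid redundant conversions.
image = tf.image.convert_image_dtype(image, tf.float32)

# Deterministic adjustment with a fixed parameter ...
brighter = tf.image.adjust_brightness(image, delta=0.1)

# ... or random adjustments drawn from predefined intervals, which are often
# used to expand a training set.
augmented = tf.image.random_brightness(image, max_delta=0.2)
augmented = tf.image.random_contrast(augmented, lower=0.8, upper=1.2)
```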
## Working with Bounding Boxes
@@draw_bounding_boxes
@@non_max_suppression
@@sample_distorted_bounding_box
## Denoising
@@total_variation
"""
from __future__ import absolute_import
......