Commit 1d3874fe authored by Manjunath Kudlur

TensorFlow: Upstream changes to git.

Changes:
- Documentation changes.
- Update URL for protobuf submodule.

Base CL: 107345722
Parent 71842dab
[submodule "google/protobuf"]
path = google/protobuf
-url = https://github.googlesource.com/google/protobuf.git
+url = https://github.com/google/protobuf.git
@@ -26,25 +26,25 @@ write the graph to a file.
##Classes <a class="md-anchor" id="AUTOGENERATED-classes"></a>
-* [tensorflow::Env](ClassEnv.md)
-* [tensorflow::EnvWrapper](ClassEnvWrapper.md)
-* [tensorflow::RandomAccessFile](ClassRandomAccessFile.md)
-* [tensorflow::Session](ClassSession.md)
-* [tensorflow::Status](ClassStatus.md)
-* [tensorflow::Tensor](ClassTensor.md)
-* [tensorflow::TensorBuffer](ClassTensorBuffer.md)
-* [tensorflow::TensorShape](ClassTensorShape.md)
-* [tensorflow::TensorShapeIter](ClassTensorShapeIter.md)
-* [tensorflow::TensorShapeUtils](ClassTensorShapeUtils.md)
-* [tensorflow::Thread](ClassThread.md)
-* [tensorflow::WritableFile](ClassWritableFile.md)
+* [tensorflow::Env](../../api_docs/cc/ClassEnv.md)
+* [tensorflow::EnvWrapper](../../api_docs/cc/ClassEnvWrapper.md)
+* [tensorflow::RandomAccessFile](../../api_docs/cc/ClassRandomAccessFile.md)
+* [tensorflow::Session](../../api_docs/cc/ClassSession.md)
+* [tensorflow::Status](../../api_docs/cc/ClassStatus.md)
+* [tensorflow::Tensor](../../api_docs/cc/ClassTensor.md)
+* [tensorflow::TensorBuffer](../../api_docs/cc/ClassTensorBuffer.md)
+* [tensorflow::TensorShape](../../api_docs/cc/ClassTensorShape.md)
+* [tensorflow::TensorShapeIter](../../api_docs/cc/ClassTensorShapeIter.md)
+* [tensorflow::TensorShapeUtils](../../api_docs/cc/ClassTensorShapeUtils.md)
+* [tensorflow::Thread](../../api_docs/cc/ClassThread.md)
+* [tensorflow::WritableFile](../../api_docs/cc/ClassWritableFile.md)
##Structs <a class="md-anchor" id="AUTOGENERATED-structs"></a>
-* [tensorflow::SessionOptions](StructSessionOptions.md)
-* [tensorflow::Status::State](StructState.md)
-* [tensorflow::TensorShapeDim](StructTensorShapeDim.md)
-* [tensorflow::ThreadOptions](StructThreadOptions.md)
+* [tensorflow::SessionOptions](../../api_docs/cc/StructSessionOptions.md)
+* [tensorflow::Status::State](../../api_docs/cc/StructState.md)
+* [tensorflow::TensorShapeDim](../../api_docs/cc/StructTensorShapeDim.md)
+* [tensorflow::ThreadOptions](../../api_docs/cc/StructThreadOptions.md)
<div class='sections-order' style="display: none;">
......
@@ -68,8 +68,8 @@ also use MNIST as an example in our technical tutorial where we elaborate on
TensorFlow features.
## Recommended Next Steps: <a class="md-anchor" id="AUTOGENERATED-recommended-next-steps-"></a>
-* [Download and Setup](os_setup.md)
-* [Basic Usage](basic_usage.md)
+* [Download and Setup](../get_started/os_setup.md)
+* [Basic Usage](../get_started/basic_usage.md)
* [TensorFlow Mechanics 101](../tutorials/mnist/tf/index.md)
......
@@ -35,7 +35,7 @@ to:
* [Verify it works](#AUTOGENERATED-verify-it-works)
* [Validation](#Validation)
* [Op registration](#AUTOGENERATED-op-registration)
-* [Attrs](#AUTOGENERATED-attrs)
+* [Attrs](#Attrs)
* [Attr types](#AUTOGENERATED-attr-types)
* [Polymorphism](#Polymorphism)
* [Inputs and Outputs](#AUTOGENERATED-inputs-and-outputs)
@@ -51,8 +51,8 @@ to:
You define the interface of an Op by registering it with the TensorFlow system.
In the registration, you specify the name of your Op, its inputs (types and
-names) and outputs (types and names), as well as [docstrings](#docstrings) and
-any [attrs](#attrs) the Op might require.
+names) and outputs (types and names), as well as docstrings and
+any [attrs](#Attrs) the Op might require.
To see how this works, suppose you'd like to create an Op that takes a tensor of
`int32`s and outputs a copy of the tensor, with all but the first element set to
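The hunk collapses mid-sentence; presumably the remaining elements are set to zero. As a hedged illustration only (this is not the registered C++ Op itself), that presumed contract could be sketched with stock Python ops:

```python
import tensorflow as tf

# Illustration of the example op's presumed contract: copy a 1-D int32
# tensor, keeping element 0 and zeroing the rest. Not the registered Op.
def zero_out_reference(x):
  first = tf.slice(x, [0], [1])                 # keep the first element
  rest = tf.zeros_like(tf.slice(x, [1], [-1]))  # size -1 means "to the end"
  return tf.concat(0, [first, rest])            # 2015-era concat(dim, values)
```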
@@ -249,7 +249,7 @@ function on error.
## Op registration <a class="md-anchor" id="AUTOGENERATED-op-registration"></a>
-### Attrs <a class="md-anchor" id="AUTOGENERATED-attrs"></a>
+### Attrs <a class="md-anchor" id="Attrs"></a>
Ops can have attrs, whose values are set when the Op is added to a graph. These
are used to configure the Op, and their values can be accessed both within the
......
@@ -5,7 +5,7 @@ TensorFlow computation graphs are powerful but complicated. The graph visualizat
![Visualization of a TensorFlow graph](./graph_vis_animation.gif "Visualization of a TensorFlow graph")
*Visualization of a TensorFlow graph.*
-To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see [Summaries and TensorBoard](../summaries_and_tensorboard/index.md).
+To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see [Summaries and TensorBoard](../../how_tos/summaries_and_tensorboard/index.md).
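As a minimal sketch, the log directory the text refers to can be produced from Python roughly like this (the path is an arbitrary assumption):

```python
import tensorflow as tf

# Write the graph definition to a log directory so TensorBoard's graph tab
# has something to display; /tmp/mnist_logs is just an example path.
sess = tf.Session()
writer = tf.train.SummaryWriter("/tmp/mnist_logs", graph_def=sess.graph_def)
```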
## Name scoping and nodes <a class="md-anchor" id="AUTOGENERATED-name-scoping-and-nodes"></a>
......
@@ -6,7 +6,7 @@
TensorFlow Variables are in-memory buffers containing tensors. Learn how to
use them to hold and update model parameters during training.
-[View Tutorial](variables/index.md)
+[View Tutorial](../how_tos/variables/index.md)
## TensorFlow Mechanics 101 <a class="md-anchor" id="AUTOGENERATED-tensorflow-mechanics-101"></a>
@@ -25,7 +25,7 @@ your model(s). This tutorial describes how to build and run TensorBoard as well
as how to add Summary ops to automatically output data to the Events files that
TensorBoard uses for display.
-[View Tutorial](summaries_and_tensorboard/index.md)
+[View Tutorial](../how_tos/summaries_and_tensorboard/index.md)
## TensorBoard: Graph Visualization <a class="md-anchor" id="AUTOGENERATED-tensorboard--graph-visualization"></a>
@@ -33,7 +33,7 @@ TensorBoard uses for display.
This tutorial describes how to use the graph visualizer in TensorBoard to help
you understand the dataflow graph and debug it.
-[View Tutorial](graph_viz/index.md)
+[View Tutorial](../how_tos/graph_viz/index.md)
## Reading Data <a class="md-anchor" id="AUTOGENERATED-reading-data"></a>
@@ -41,7 +41,7 @@ you understand the dataflow graph and debug it.
This tutorial describes the three main methods of getting data into your
TensorFlow program: Feeding, Reading and Preloading.
-[View Tutorial](reading_data/index.md)
+[View Tutorial](../how_tos/reading_data/index.md)
## Threading and Queues <a class="md-anchor" id="AUTOGENERATED-threading-and-queues"></a>
@@ -49,7 +49,7 @@ TensorFlow program: Feeding, Reading and Preloading.
This tutorial describes the various constructs implemented by TensorFlow
to facilitate asynchronous and concurrent training.
-[View Tutorial](threading_and_queues/index.md)
+[View Tutorial](../how_tos/threading_and_queues/index.md)
## Adding a New Op <a class="md-anchor" id="AUTOGENERATED-adding-a-new-op"></a>
@@ -57,7 +57,7 @@ to facilitate asynchronous and concurrent training.
TensorFlow already has a large suite of node operations from which you can
compose in your graph, but here are the details of how to add your own custom Op.
-[View Tutorial](adding_an_op/index.md)
+[View Tutorial](../how_tos/adding_an_op/index.md)
## Custom Data Readers <a class="md-anchor" id="AUTOGENERATED-custom-data-readers"></a>
@@ -65,14 +65,14 @@ compose in your graph, but here are the details of how to add your own custom Op.
If you have a sizable custom data set, you may want to consider extending
TensorFlow to read your data directly in its native format. Here's how.
-[View Tutorial](new_data_formats/index.md)
+[View Tutorial](../how_tos/new_data_formats/index.md)
## Using GPUs <a class="md-anchor" id="AUTOGENERATED-using-gpus"></a>
This tutorial describes how to construct and execute models on GPU(s).
-[View Tutorial](using_gpu/index.md)
+[View Tutorial](../how_tos/using_gpu/index.md)
## Sharing Variables <a class="md-anchor" id="AUTOGENERATED-sharing-variables"></a>
@@ -83,7 +83,7 @@ different locations in the model construction code.
The "Variable Scope" mechanism is designed to facilitate that.
-[View Tutorial](variable_scope/index.md)
+[View Tutorial](../how_tos/variable_scope/index.md)
<div class='sections-order' style="display: none;">
<!--
......
@@ -94,7 +94,7 @@ helper functions from
without modifying any arguments.
Next you will create the actual Reader op. It will help if you are familiar
-with [the adding an op how-to](../adding_an_op/index.md). The main steps
+with [the adding an op how-to](../../how_tos/adding_an_op/index.md). The main steps
are:
* Registering the op.
@@ -122,7 +122,7 @@ REGISTER_OP("TextLineReader")
A Reader that outputs the lines of a file delimited by '\n'.
)doc");
```
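For context, a short sketch of how a registered Reader such as `TextLineReader` is driven from Python (the filenames are placeholders):

```python
import tensorflow as tf

# read() dequeues a filename when needed and returns the next (key, value)
# pair, where value is one line of text from the current file.
filename_queue = tf.train.string_input_producer(["file0.txt", "file1.txt"])
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)
```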
To define an `OpKernel`, Readers can use the shortcut of descending from
`ReaderOpKernel`, defined in
[tensorflow/core/framework/reader_op_kernel.h](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/reader_op_kernel.h),
@@ -199,7 +199,7 @@ You can see some examples in
## Writing an Op for a record format <a class="md-anchor" id="AUTOGENERATED-writing-an-op-for-a-record-format"></a>
Generally this is an ordinary op that takes a scalar string record as input, and
-so follow [the instructions to add an Op](../adding_an_op/index.md). You may
+so follow [the instructions to add an Op](../../how_tos/adding_an_op/index.md). You may
optionally take a scalar string key as input, and include that in error messages
reporting improperly formatted data. That way users can more easily track down
where the bad data came from.
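As a hedged example of such an op, `tf.decode_csv` parses one scalar string record into typed column tensors:

```python
import tensorflow as tf

# `record` stands in for a scalar string produced by a Reader's read().
record = tf.placeholder(tf.string)
# record_defaults fixes each column's dtype and supplies fallback values.
col1, col2, col3 = tf.decode_csv(record, record_defaults=[[1], [1.0], [""]])
```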
......
@@ -266,7 +266,7 @@ This can be important:
How many threads do you need? The `tf.train.shuffle_batch*` functions add a
summary to the graph that indicates how full the example queue is. If you have
enough reading threads, that summary will stay above zero. You can
-[view your summaries as training progresses using TensorBoard](../summaries_and_tensorboard/index.md).
+[view your summaries as training progresses using TensorBoard](../../how_tos/summaries_and_tensorboard/index.md).
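A minimal sketch of raising the thread count (batch size and capacities are arbitrary; `example` and `label` stand in for reader outputs):

```python
import tensorflow as tf

# Stand-ins for tensors that would come from a reader pipeline.
example = tf.placeholder(tf.float32, shape=[784])
label = tf.placeholder(tf.int32, shape=[])

# num_threads controls how many threads feed the shuffling queue; the
# queue-fullness summary mentioned above is added automatically.
example_batch, label_batch = tf.train.shuffle_batch(
    [example, label], batch_size=128, num_threads=4,
    capacity=50000, min_after_dequeue=10000)
```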
### Creating threads to prefetch using `QueueRunner` objects <a class="md-anchor" id="QueueRunner"></a>
@@ -355,7 +355,7 @@ threads got an error when running some operation (or an ordinary Python
exception).
For more about threading, queues, QueueRunners, and Coordinators
-[see here](../threading_and_queues/index.md).
+[see here](../../how_tos/threading_and_queues/index.md).
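A compact sketch of the pattern, assuming a `train_op` that consumes dequeued examples:

```python
import tensorflow as tf

sess = tf.Session()
coord = tf.train.Coordinator()
# Launch the enqueue threads registered by the pipeline's QueueRunners.
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
  while not coord.should_stop():
    sess.run(train_op)  # assumed to dequeue training examples
except tf.errors.OutOfRangeError:
  print 'Done training -- epoch limit reached'
finally:
  coord.request_stop()  # ask the threads to stop...
  coord.join(threads)   # ...and wait until they do
```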
#### Aside: How clean shut-down when limiting epochs works <a class="md-anchor" id="AUTOGENERATED-aside--how-clean-shut-down-when-limiting-epochs-works"></a>
@@ -493,4 +493,4 @@ This is what is done in
You can have the train and eval in the same graph in the same process, and share
their trained variables. See
-[the shared variables tutorial](../variable_scope/index.md).
+[the shared variables tutorial](../../how_tos/variable_scope/index.md).
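A minimal sketch of that sharing, assuming an `inference()` function that creates all of its variables via `tf.get_variable`:

```python
import tensorflow as tf

with tf.variable_scope("model") as scope:
  train_logits = inference(train_inputs)  # first call creates the variables
  scope.reuse_variables()                 # later get_variable calls now reuse
  eval_logits = inference(eval_inputs)    # eval graph shares the same weights
```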
@@ -105,4 +105,4 @@ not contain any data relevant to that tab, a message will be displayed
indicating how to serialize data that is applicable to that tab.
For in depth information on how to use the *graph* tab to visualize your graph,
-see [TensorBoard: Visualizing your graph](../graph_viz/index.md).
+see [TensorBoard: Visualizing your graph](../../how_tos/graph_viz/index.md).
# Sharing Variables <a class="md-anchor" id="AUTOGENERATED-sharing-variables"></a>
You can create, initialize, save and load single variables
-in the way described in the [Variables HowTo](../variables/index.md).
+in the way described in the [Variables HowTo](../../how_tos/variables/index.md).
But when building complex models you often need to share large sets of
variables and you might want to initialize all of them in one place.
This tutorial shows how this can be done using `tf.variable_scope()` and
@@ -12,7 +12,7 @@ the `tf.get_variable()`.
Imagine you create a simple model for image filters, similar to our
[Convolutional Neural Networks Tutorial](../../tutorials/deep_cnn/index.md)
model but with only 2 convolutions (for simplicity of this example). If you use
-just `tf.Variable`, as explained in [Variables HowTo](../variables/index.md),
+just `tf.Variable`, as explained in [Variables HowTo](../../how_tos/variables/index.md),
your model might look like this.
```python
......
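The collapsed snippet above is the plain `tf.Variable` version; as a hedged preview of the alternative the tutorial develops, the same layer can create its parameters through `tf.get_variable` so a later call can reuse them (names and shapes here are illustrative):

```python
import tensorflow as tf

def conv_relu(inputs, kernel_shape, bias_shape):
  # get_variable creates a variable on the first call; inside a reusing
  # scope it returns the existing one instead of allocating a new filter.
  weights = tf.get_variable("weights", kernel_shape,
                            initializer=tf.random_normal_initializer())
  biases = tf.get_variable("biases", bias_shape,
                           initializer=tf.constant_initializer(0.0))
  conv = tf.nn.conv2d(inputs, weights, strides=[1, 1, 1, 1], padding='SAME')
  return tf.nn.relu(conv + biases)
```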
# BibTex Citation <a class="md-anchor" id="AUTOGENERATED-bibtex-citation"></a>
If you use TensorFlow in your research and would like to cite the TensorFlow
system, we suggest you cite the following whitepaper:
```
@misc{tensorflow2015-whitepaper,
title={{TensorFlow}: Large-Scale Machine Learning on Heterogeneous Systems},
@@ -11,7 +14,7 @@ author={
Eugene~Brevdo and
Zhifeng~Chen and
Craig~Citro and
-Greg~S~Corrado and
+Greg~S.~Corrado and
Andy~Davis and
Jeffrey~Dean and
Matthieu~Devin and
@@ -21,6 +24,7 @@ author={
Geoffrey~Irving and
Michael~Isard and
Yangqing Jia and
+Rafal~Jozefowicz and
Lukasz~Kaiser and
Manjunath~Kudlur and
Josh~Levenberg and
@@ -29,6 +33,7 @@ author={
Sherry~Moore and
Derek~Murray and
Chris~Olah and
+Mike~Schuster and
Jonathon~Shlens and
Benoit~Steiner and
Ilya~Sutskever and
@@ -36,13 +41,30 @@ author={
Paul~Tucker and
Vincent~Vanhoucke and
Vijay~Vasudevan and
-Fernanda~Vi\'{e}gas,
+Fernanda~Vi\'{e}gas and
Oriol~Vinyals and
Pete~Warden and
-Martin~Wattenberg,
+Martin~Wattenberg and
Martin~Wicke and
Yuan~Yu and
Xiaoqiang~Zheng},
year={2015},
}
```
+In textual form:
+```
+Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo,
+Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis,
+Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow,
+Andrew Harp, Geoffrey Irving, Michael Isard, Rafal Jozefowicz, Yangqing Jia,
+Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Mike Schuster,
+Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Jonathon Shlens,
+Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker,
+Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas,
+Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke,
+Yuan Yu, and Xiaoqiang Zheng.
+TensorFlow: Large-scale machine learning on heterogeneous systems,
+2015. Software available from tensorflow.org.
+```
@@ -2,7 +2,7 @@
This document provides answers to some of the frequently asked questions about
TensorFlow. If you have a question that is not covered here, you might find an
-answer on one of the TensorFlow [community resources](index.md).
+answer on one of the TensorFlow [community resources](../resources/index.md).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
@@ -54,7 +54,7 @@ uses multiple GPUs.
#### What are the different types of tensors that are available? <a class="md-anchor" id="AUTOGENERATED-what-are-the-different-types-of-tensors-that-are-available-"></a>
TensorFlow supports a variety of different data types and tensor shapes. See the
-[ranks, shapes, and types reference](dims_types.md) for more details.
+[ranks, shapes, and types reference](../resources/dims_types.md) for more details.
## Running a TensorFlow computation <a class="md-anchor" id="AUTOGENERATED-running-a-tensorflow-computation"></a>
......
@@ -6,13 +6,13 @@
Additional details about the TensorFlow programming model and the underlying
implementation can be found in our white paper:
-* [TensorFlow: Large-scale machine learning on heterogeneous systems](../extras/tensorflow-whitepaper2015.pdf)
+* [TensorFlow: Large-scale machine learning on heterogeneous systems](http://tensorflow.org/tensorflow-whitepaper2015.pdf)
### Citation <a class="md-anchor" id="AUTOGENERATED-citation"></a>
If you use TensorFlow in your research and would like to cite the TensorFlow
system, we suggest you cite the paper above.
-You can use this [BibTeX entry](bib.md). As the project progresses, we
+You can use this [BibTeX entry](../resources/bib.md). As the project progresses, we
may update the suggested citation with new papers.
......
@@ -105,7 +105,7 @@ adds operations that perform inference, i.e. classification, on supplied images.
add operations that compute the loss,
gradients, variable updates and visualization summaries.
-### Model Inputs <a class="md-anchor" id="AUTOGENERATED-model-inputs"></a>
+### Model Inputs <a class="md-anchor" id="model-inputs"></a>
The input part of the model is built by the functions `inputs()` and
`distorted_inputs()` which read images from the CIFAR-10 binary data files.
@@ -143,7 +143,7 @@ processing time. To prevent these operations from slowing down training, we run
them inside 16 separate threads which continuously fill a TensorFlow
[queue](../../api_docs/python/io_ops.md#shuffle_batch).
-### Model Prediction <a class="md-anchor" id="AUTOGENERATED-model-prediction"></a>
+### Model Prediction <a class="md-anchor" id="model-prediction"></a>
The prediction part of the model is constructed by the `inference()` function
which adds operations to compute the *logits* of the predictions. That part of
@@ -181,7 +181,7 @@ the CIFAR-10 model specified in
layers are locally connected and not fully connected. Try editing the
architecture to exactly replicate that fully connected model.
-### Model Training <a class="md-anchor" id="AUTOGENERATED-model-training"></a>
+### Model Training <a class="md-anchor" id="model-training"></a>
The usual method for training a network to perform N-way classification is
[multinomial logistic regression](https://en.wikipedia.org/wiki/Multinomial_logistic_regression),
@@ -302,7 +302,7 @@ values. See how the scripts use
[ExponentialMovingAverage](../../api_docs/python/train.md#ExponentialMovingAverage)
for this purpose.
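A hedged sketch of the moving-average pattern (`opt_op`, the optimizer's update op, is an assumption):

```python
import tensorflow as tf

# Maintain shadow (moving-average) copies of the trained variables; the
# eval script restores these averaged values instead of the raw weights.
ema = tf.train.ExponentialMovingAverage(decay=0.9999)
with tf.control_dependencies([opt_op]):
  # Refresh the averages after each parameter update.
  train_op = tf.group(ema.apply(tf.trainable_variables()))
```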
-## Evaluating a Model <a class="md-anchor" id="AUTOGENERATED-evaluating-a-model"></a>
+## Evaluating a Model <a class="md-anchor" id="evaluating-a-model"></a>
Let us now evaluate how well the trained model performs on a hold-out data set.
The model is evaluated by the script `cifar10_eval.py`. It constructs the model
......
@@ -7,7 +7,7 @@ If you're new to machine learning, we recommend starting here. You'll learn
about a classic problem, handwritten digit classification (MNIST), and get a
gentle introduction to multiclass classification.
-[View Tutorial](mnist/beginners/index.md)
+[View Tutorial](../tutorials/mnist/beginners/index.md)
## Deep MNIST for Experts <a class="md-anchor" id="AUTOGENERATED-deep-mnist-for-experts"></a>
@@ -16,7 +16,7 @@ If you're already familiar with other deep learning software packages, and are
already familiar with MNIST, this tutorial will give you a very brief primer on
TensorFlow.
-[View Tutorial](mnist/pros/index.md)
+[View Tutorial](../tutorials/mnist/pros/index.md)
## TensorFlow Mechanics 101 <a class="md-anchor" id="AUTOGENERATED-tensorflow-mechanics-101"></a>
@@ -25,7 +25,7 @@ This is a technical tutorial, where we walk you through the details of using
TensorFlow infrastructure to train models at scale. We again use MNIST as the
example.
-[View Tutorial](mnist/tf/index.md)
+[View Tutorial](../tutorials/mnist/tf/index.md)
## Convolutional Neural Networks <a class="md-anchor" id="AUTOGENERATED-convolutional-neural-networks"></a>
@@ -35,7 +35,7 @@ Convolutional neural nets are particularly tailored to images, since they
exploit translation invariance to yield more compact and effective
representations of visual content.
-[View Tutorial](deep_cnn/index.md)
+[View Tutorial](../tutorials/deep_cnn/index.md)
## Vector Representations of Words <a class="md-anchor" id="AUTOGENERATED-vector-representations-of-words"></a>
@@ -46,7 +46,7 @@ method for learning embeddings. It also covers the high-level details behind
noise-contrastive training methods (the biggest recent advance in training
embeddings).
-[View Tutorial](word2vec/index.md)
+[View Tutorial](../tutorials/word2vec/index.md)
## Recurrent Neural Networks <a class="md-anchor" id="AUTOGENERATED-recurrent-neural-networks"></a>
@@ -54,7 +54,7 @@ embeddings).
An introduction to RNNs, wherein we train an LSTM network to predict the next
word in an English sentence. (A task sometimes called language modeling.)
-[View Tutorial](recurrent/index.md)
+[View Tutorial](../tutorials/recurrent/index.md)
## Sequence-to-Sequence Models <a class="md-anchor" id="AUTOGENERATED-sequence-to-sequence-models"></a>
@@ -63,7 +63,7 @@ A follow-on to the RNN tutorial, where we assemble a sequence-to-sequence model
for machine translation. You will learn to build your own English-to-French
translator, entirely machine learned, end-to-end.
-[View Tutorial](seq2seq/index.md)
+[View Tutorial](../tutorials/seq2seq/index.md)
## Mandelbrot Set <a class="md-anchor" id="AUTOGENERATED-mandelbrot-set"></a>
@@ -71,7 +71,7 @@ translator, entirely machine learned, end-to-end.
TensorFlow can be used for computation that has nothing to do with machine
learning. Here's a naive implementation of Mandelbrot set visualization.
-[View Tutorial](mandelbrot/index.md)
+[View Tutorial](../tutorials/mandelbrot/index.md)
## Partial Differential Equations <a class="md-anchor" id="AUTOGENERATED-partial-differential-equations"></a>
@@ -79,7 +79,7 @@ learning. Here's a naive implementation of Mandelbrot set visualization.
As another example of non-machine learning computation, we offer an example of
a naive PDE simulation of raindrops landing on a pond.
-[View Tutorial](pdes/index.md)
+[View Tutorial](../tutorials/pdes/index.md)
## MNIST Data Download <a class="md-anchor" id="AUTOGENERATED-mnist-data-download"></a>
@@ -87,7 +87,7 @@ a naive PDE simulation of raindrops landing on a pond.
Details about downloading the MNIST handwritten digits data set. Exciting
stuff.
-[View Tutorial](mnist/download/index.md)
+[View Tutorial](../tutorials/mnist/download/index.md)
## Visual Object Recognition <a class="md-anchor" id="AUTOGENERATED-visual-object-recognition"></a>
......
@@ -3,7 +3,7 @@
*This tutorial is intended for readers who are new to both machine learning and
TensorFlow. If you already
know what MNIST is, and what softmax (multinomial logistic) regression is,
-you might prefer this [faster paced tutorial](../pros/index.md).*
+you might prefer this [faster paced tutorial](../../../tutorials/mnist/pros/index.md).*
When one learns how to program, there's a tradition that the first thing you do
is print "Hello World." Just like programming has Hello World, machine learning
@@ -417,6 +417,6 @@ a look at this
[list of results](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html).)
What matters is that we learned from this model. Still, if you're feeling a bit
-down about these results, check out [the next tutorial](../../index.md) where we
+down about these results, check out [the next tutorial](../../../tutorials/index.md) where we
do a lot better, and learn how to build more sophisticated models using
TensorFlow!
@@ -9,7 +9,7 @@ while constructing a deep convolutional MNIST classifier.
*This introduction assumes familiarity with neural networks and the MNIST
dataset. If you don't have
a background with them, check out the
-[introduction for beginners](../beginners/index.md).*
+[introduction for beginners](../../../tutorials/mnist/beginners/index.md).*
## Setup <a class="md-anchor" id="AUTOGENERATED-setup"></a>
......
@@ -56,7 +56,7 @@ Dataset | Purpose
`data_sets.validation` | 5000 images and labels, for iterative validation of training accuracy.
`data_sets.test` | 10000 images and labels, for final testing of trained accuracy.
-For more information about the data, please read the [`Download`](../download/index.md)
+For more information about the data, please read the [`Download`](../../../tutorials/mnist/download/index.md)
tutorial.
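For reference, a sketch of loading these splits, assuming the tutorial's `input_data` helper module is on the path:

```python
import input_data  # helper module shipped with the MNIST tutorial code

data_sets = input_data.read_data_sets("MNIST_data")
print data_sets.validation.num_examples  # 5000, matching the table above
```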
### Inputs and Placeholders <a class="md-anchor" id="AUTOGENERATED-inputs-and-placeholders"></a>
......
@@ -117,7 +117,7 @@ for current_batch_of_words in words_in_dataset:
### Inputs <a class="md-anchor" id="AUTOGENERATED-inputs"></a>
The word IDs will be embedded into a dense representation (see the
-[Vectors Representations Tutorial](../word2vec/index.md)) before feeding to
+[Vectors Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to
the LSTM. This allows the model to efficiently represent the knowledge about
particular words. It is also easy to write:
......
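The snippet itself is collapsed above; the embedding step amounts to something like this sketch (sizes and names are assumptions):

```python
import tensorflow as tf

vocabulary_size, embedding_size = 10000, 200       # illustrative sizes
word_ids = tf.placeholder(tf.int32, shape=[None])  # a batch of word IDs

# One row per vocabulary word; the lookup gathers the rows for the batch.
embedding = tf.get_variable("embedding", [vocabulary_size, embedding_size])
word_embeddings = tf.nn.embedding_lookup(embedding, word_ids)
```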
# Sequence-to-Sequence Models <a class="md-anchor" id="AUTOGENERATED-sequence-to-sequence-models"></a>
Recurrent neural networks can learn to model language, as already discussed
-in the [RNN Tutorial](../recurrent/index.md)
+in the [RNN Tutorial](../../tutorials/recurrent/index.md)
(if you did not read it, please go through it before proceeding with this one).
This raises an interesting question: could we condition the generated words on
some input and generate a meaningful response? For example, could we train
@@ -45,7 +45,7 @@ This basic architecture is depicted below.
</div>
Each box in the picture above represents a cell of the RNN, most commonly
-a GRU cell or an LSTM cell (see the [RNN Tutorial](../recurrent/index.md)
+a GRU cell or an LSTM cell (see the [RNN Tutorial](../../tutorials/recurrent/index.md)
for an explanation of those). Encoder and decoder can share weights or,
as is more common, use a different set of parameters. Multi-layer cells
have been successfully used in sequence-to-sequence models too, e.g. for
@@ -86,7 +86,7 @@ that determines which cell will be used inside the model. You can use
an existing cell, such as `GRUCell` or `LSTMCell`, or you can write your own.
Moreover, `rnn_cell` provides wrappers to construct multi-layer cells,
add dropout to cell inputs or outputs, or to do other transformations,
-see the [RNN Tutorial](../recurrent/index.md) for examples.
+see the [RNN Tutorial](../../tutorials/recurrent/index.md) for examples.
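For example, a hedged sketch of stacking wrapped cells (the `tensorflow.models.rnn` module path is our assumption about where `rnn_cell` lives):

```python
from tensorflow.models.rnn import rnn_cell

# Wrap a GRU cell with dropout on its outputs, then stack three of them.
single = rnn_cell.GRUCell(256)
dropped = rnn_cell.DropoutWrapper(single, output_keep_prob=0.9)
stacked = rnn_cell.MultiRNNCell([dropped] * 3)
```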
The call to `basic_rnn_seq2seq` returns two arguments: `outputs` and `states`.
Both of them are lists of tensors of the same length as `decoder_inputs`.
@@ -112,7 +112,7 @@ outputs, states = embedding_rnn_seq2seq(
In the `embedding_rnn_seq2seq` model, all inputs (both `encoder_inputs` and
`decoder_inputs`) are integer-tensors that represent discrete values.
They will be embedded into a dense representation (see the
-[Vectors Representations Tutorial](../word2vec/index.md) for more details
+[Vectors Representations Tutorial](../../tutorials/word2vec/index.md) for more details
on embeddings), but to construct these embeddings we need to specify
the maximum number of discrete symbols that will appear: `num_encoder_symbols`
on the encoder side, and `num_decoder_symbols` on the decoder side.
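Putting the pieces together, a hedged sketch of the call (module path, cell size, and symbol counts are assumptions; `encoder_inputs` and `decoder_inputs` are the lists of integer tensors described above):

```python
from tensorflow.models.rnn import rnn_cell, seq2seq

cell = rnn_cell.GRUCell(256)
outputs, states = seq2seq.embedding_rnn_seq2seq(
    encoder_inputs, decoder_inputs, cell,
    num_encoder_symbols=40000, num_decoder_symbols=40000)
```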
......
@@ -66,10 +66,13 @@ class Index(Document):
    for filename, library in self._filename_to_library_map:
      sorted_names = sorted(library.mentioned, key=str.lower)
      member_names = [n for n in sorted_names if n in self._members]
-     links = ["[`%s`](%s#%s)" % (name, filename, anchor_f(name))
+     # TODO: This is a hack that should be removed as soon as the website code
+     # allows it.
+     full_filename = '../../api_docs/python/' + filename
+     links = ["[`%s`](%s#%s)" % (name, full_filename, anchor_f(name))
               for name in member_names]
      if links:
-       print >>f, "* **[%s](%s)**:" % (library.title, filename)
+       print >>f, "* **[%s](%s)**:" % (library.title, full_filename)
        for link in links:
          print >>f, " * %s" % link
        print >>f, ""
......