Commit 0764bb4a authored by Luo Tao

add exclude_patterns for conf.py.in

Parent 71b42039
......@@ -72,6 +72,7 @@ function( Sphinx_add_target target_name builder conf cache source destination )
${source}
${destination}
COMMENT "Generating sphinx documentation: ${builder}"
COMMAND ln -s ${destination}/index_*.html ${destination}/index.html
)
set_property(
......@@ -143,4 +144,4 @@ function( Sphinx_add_targets target_base_name conf source base_destination )
add_dependencies( ${target_base_name}_linkcheck ${_dependencies} )
endif()
endfunction()
\ No newline at end of file
endfunction()
.. _api_pydataprovider2_en:
.. _api_pydataprovider2:
PyDataProvider2
===============
......@@ -104,7 +104,7 @@ And PaddlePaddle will do all of the rest\:
Is this cool?
.. _api_pydataprovider2_en_sequential_model:
.. _api_pydataprovider2_sequential_model:
DataProvider for the sequential model
-------------------------------------
......
......@@ -23,7 +23,7 @@ python's :code:`help()` function. Let's walk through the above python script:
* At the beginning, use :code:`swig_paddle.initPaddle()` to initialize
PaddlePaddle with command line arguments, for more about command line arguments
see :ref:`cmd_detail_introduction_en` .
see :ref:`cmd_detail_introduction` .
* Parse the configuration file that is used in training with :code:`parse_config()`.
Because the data to predict usually has no label, and the output of prediction is
normally the output layer rather than the cost layer, you should modify
......@@ -36,7 +36,7 @@ python's :code:`help()` function. Let's walk through the above python script:
- Note: As swig_paddle can only accept C++ matrices, we offer a utility
class DataProviderConverter that accepts the same input data as
PyDataProvider2; for more information, please refer to the documentation
of :ref:`api_pydataprovider2_en` .
of :ref:`api_pydataprovider2` .
* Do the prediction with :code:`forwardTest()`, which takes the converted
input data and outputs the activations of the output layer.
......
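For context, the prediction walkthrough above roughly corresponds to the Python sketch below. It is only an illustration assuming the old py_paddle SWIG bindings; the input type `dense_vector(784)`, the model directory, and the sample data are made up and are not part of this commit.

```python
# Hedged sketch of the prediction flow described above; module paths, the
# dense_vector(784) input type and the model directory are assumptions.
from py_paddle import swig_paddle, DataProviderConverter
from paddle.trainer.config_parser import parse_config
from paddle.trainer.PyDataProvider2 import dense_vector

swig_paddle.initPaddle("--use_gpu=0")             # initialize with CLI arguments
conf = parse_config("trainer_config.py",          # training config, switched to
                    "is_predict=1")               # prediction mode
network = swig_paddle.GradientMachine.createFromConfigProto(conf.model_config)
network.loadParameters("./output/pass-00000/")    # hypothetical model directory

converter = DataProviderConverter([dense_vector(784)])
arguments = converter([[[0.0] * 784]])            # same layout PyDataProvider2 yields
outputs = network.forwardTest(arguments)          # activations of the output layer
print(outputs[0]["value"])
```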
......@@ -79,7 +79,7 @@ language = 'zh_CN'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
exclude_patterns = ['_build', '**/*_en*', '*_en*']
# The reST default role (used for this markup: `text`) to use for all
# documents.
......
......@@ -80,7 +80,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
exclude_patterns = ['_build', '**/*_cn*', '*_cn*']
# The reST default role (used for this markup: `text`) to use for all
# documents.
......
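As a rough illustration of what the two new `exclude_patterns` entries do: the patterns behave like shell globs relative to the documentation source root. The check below uses Python's `fnmatch` as an approximation (Sphinx's own matching lives in `sphinx.util.matching` and differs in detail), and the file names are hypothetical.

```python
# Approximate check of the new exclude patterns; file names are made up.
from fnmatch import fnmatch

patterns = ['_build', '**/*_en*', '*_en*']        # patterns for the Chinese build
for path in ['index_en.rst', 'howto/cmd_parameter/index_en.md', 'index_cn.rst']:
    excluded = any(fnmatch(path, p) for p in patterns)
    print(path, '->', 'excluded' if excluded else 'kept')
# The English build mirrors this with '**/*_cn*' and '*_cn*'.
```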
```eval_rst
.. _cmd_detail_introduction_en:
.. _cmd_detail_introduction:
```
# Detail Description
......
```eval_rst
.. _cmd_line_index_en:
.. _cmd_line_index:
```
# How to Set Command-line Parameters
......
......@@ -30,7 +30,7 @@ Then at the :code:`process` function, each :code:`yield` function will return th
yield src_ids, trg_ids, trg_ids_next
For a more detailed description of how to write a data provider, please refer to :ref:`api_pydataprovider2_en` . The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`.
For a more detailed description of how to write a data provider, please refer to :ref:`api_pydataprovider2` . The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`.
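A minimal sketch of what such a provider can look like with PyDataProvider2 follows; the vocabulary size, the line format, and the end-of-sequence id are illustrative assumptions, and the real provider is :code:`demo/seqToseq/dataprovider.py`.

```python
# Hedged sketch of a seq-to-seq data provider; vocabulary sizes, line format
# and the <eos> id are illustrative only.
from paddle.trainer.PyDataProvider2 import provider, integer_value_sequence

@provider(input_types=[integer_value_sequence(30000),   # src_ids
                       integer_value_sequence(30000),   # trg_ids
                       integer_value_sequence(30000)])  # trg_ids_next
def process(settings, file_name):
    with open(file_name) as f:
        for line in f:
            # assume one tab-separated pair of id sequences per line
            src, trg = line.strip().split('\t')
            src_ids = [int(w) for w in src.split()]
            trg_ids = [int(w) for w in trg.split()]
            trg_ids_next = trg_ids[1:] + [1]   # shifted target, <eos> id assumed 1
            yield src_ids, trg_ids, trg_ids_next
```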
===============================================
Configure Recurrent Neural Network Architecture
......@@ -246,6 +246,6 @@ The code is listed below:
outputs(beam_gen)
Notice that this generation technique is only useful for a decoder-like generation process. If you are working on sequence tagging tasks, please refer to :ref:`semantic_role_labeling_en` for more details.
Notice that this generation technique is only useful for a decoder-like generation process. If you are working on sequence tagging tasks, please refer to :ref:`semantic_role_labeling` for more details.
The full configuration file is located at :code:`demo/seqToseq/seqToseq_net.py`.
```eval_rst
.. _demo_ml_dataset_en:
.. _demo_ml_dataset:
```
# MovieLens Dataset
......
......@@ -16,7 +16,7 @@ Data Preparation
````````````````
Download and extract dataset
''''''''''''''''''''''''''''
We use :ref:`demo_ml_dataset_en` here.
We use :ref:`demo_ml_dataset` here.
To download and unzip the dataset, simply run the following commands.
.. code-block:: bash
......@@ -264,7 +264,7 @@ In this :code:`dataprovider.py`, we should set\:
* use_seq\: Whether this :code:`dataprovider.py` is in sequence mode or not.
* process\: Returns each sample of data to :code:`paddle`.
For details on the data provider, see :ref:`api_pydataprovider2_en`.
For details on the data provider, see :ref:`api_pydataprovider2`.
Train
`````
......@@ -280,7 +280,7 @@ The run.sh is shown as follows:
It just starts a paddle training process, writes the log to `log.txt`,
and then prints it on screen.
For each command-line argument in :code:`run.sh`, please refer to the :ref:`cmd_line_index_en` page. A short description of these arguments follows.
For each command-line argument in :code:`run.sh`, please refer to the :ref:`cmd_line_index` page. A short description of these arguments follows.
* config\: Tells paddle which file holds the neural network configuration.
* save_dir\: Tells paddle to save the model into './output'.
......
```eval_rst
.. _semantic_role_labeling_en:
.. _semantic_role_labeling:
```
# Semantic Role labeling Tutorial #
......
......@@ -186,7 +186,7 @@ def define_py_data_sources2(train_list, test_list, module, obj, args=None):
obj="process",
args={"dictionary": dict_name})
For the related data provider, please refer to :ref:`api_pydataprovider2_en_sequential_model` .
For the related data provider, please refer to :ref:`api_pydataprovider2_sequential_model` .
:param train_list: Train list name.
:type train_list: basestring
......
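To round out the docstring excerpt above, a complete call might look like the sketch below; the list files and dictionary path are placeholders, not taken from this commit.

```python
# Hedged example of a full define_py_data_sources2 call; paths are placeholders.
from paddle.trainer_config_helpers import define_py_data_sources2

dict_name = './data/dict.txt'
define_py_data_sources2(
    train_list='./data/train.list',   # one data file path per line
    test_list='./data/test.list',
    module='dataprovider',            # dataprovider.py importable by the trainer
    obj='process',                    # the @provider-decorated generator
    args={'dictionary': dict_name})   # forwarded to the provider's init hook
```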