...
To get started, please install PaddlePaddle on your computer. Throughout this tutorial, you will learn by implementing different DL models for text classification.
To install PaddlePaddle, please follow the instructions here: <a href="../../getstarted/build_and_install/index_en.html">Build and Install</a>.
## Overview
For the first step, you will use PaddlePaddle to build a **text classification** system. For example, suppose you run an e-commerce website and you want to analyze the sentiment of user reviews to evaluate product quality.
You can refer to the following link for more detailed examples and data formats: <a href="../../api/data_provider/pydataprovider2_en.html">PyDataProvider2</a>.
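For a rough illustration of what such a data provider looks like (a sketch under assumptions, not the demo's exact code), the bag-of-words setup could be written with PyDataProvider2 roughly as follows. It assumes each line of the data file is `label<TAB>space-separated words` and that the word dictionary is passed in from the trainer configuration:

```python
from paddle.trainer.PyDataProvider2 import *

UNK_IDX = 0  # assumed index reserved for out-of-vocabulary words


def initializer(settings, dictionary, **kwargs):
    # The dictionary is assumed to be passed in from the trainer config.
    # Declare two input slots: a bag-of-words vector and an integer label.
    settings.word_dict = dictionary
    settings.input_types = [
        sparse_binary_vector(len(dictionary)),
        integer_value(2)
    ]


@provider(init_hook=initializer)
def process(settings, file_name):
    # Each line is assumed to be "<label>\t<space-separated words>".
    with open(file_name, 'r') as f:
        for line in f:
            label, comment = line.strip().split('\t')
            word_ids = [settings.word_dict.get(w, UNK_IDX)
                        for w in comment.split()]
            yield word_ids, int(label)
```

The trainer configuration would point at such a provider (for example via `define_py_data_sources2`) and pass the dictionary through its `args`.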
## Network Architecture
This section describes four kinds of network architectures.
<center> ![](./PipelineNetwork_en.jpg)</center>
First, you will build a logistic regression model. Later, you will also get the chance to build other, more powerful network architectures.
For more detailed documentation, you can refer to the <a href="../../api/trainer_config_helpers/layers.html">layer documentation</a>. All configuration files are in the `demo/quick_start` directory.
### Logistic Regression
The architecture is illustrated in the following picture:
...
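The picture and the full configuration are elided here. As a minimal sketch under assumptions (the vocabulary size and layer names are placeholders, not the demo's exact file), a logistic regression classifier over a bag-of-words input can be written with the `trainer_config_helpers` API like this:

```python
from paddle.trainer_config_helpers import *

dict_size = 10000  # assumed vocabulary size

# Input: a bag-of-words vector over the dictionary; label: 2 classes.
data = data_layer(name="word", size=dict_size)
label = data_layer(name="label", size=2)

# A single fully connected layer with softmax over the two sentiment
# classes is exactly logistic (softmax) regression.
output = fc_layer(input=data, size=2, act=SoftmaxActivation())

# Train by minimizing the classification (cross-entropy) cost.
cls = classification_cost(input=output, label=label)
outputs(cls)
```

Data sources and optimizer settings (see the Optimization Algorithm section below) are omitted from this sketch.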
...
<br>
## Optimization Algorithm
<ahref = "../../ui/api/trainer_config_helpers/optimizers.html">Optimization algorithms</a> include Momentum, RMSProp, AdaDelta, AdaGrad, Adam, and Adamax. You can use Adam optimization method here, with L2 regularization and gradient clipping, because Adam has been proved to work very well for training recurrent neural network.
<ahref = "../../api/trainer_config_helpers/optimizers.html">Optimization algorithms</a> include Momentum, RMSProp, AdaDelta, AdaGrad, Adam, and Adamax. You can use Adam optimization method here, with L2 regularization and gradient clipping, because Adam has been proved to work very well for training recurrent neural network.
```python
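# Global training settings. Per the paragraph above, the arguments elided
# below select Adam with L2 regularization and gradient clipping.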
settings(batch_size=128,
...
--use_gpu=false
```
We do not provide examples on how to train on clusters here. If you want to train on clusters, please refer to the <a href="../../howto/cluster/cluster_train_en.html">distributed training</a> documentation or other demos for more details.
## Inference
You can use the trained model to perform prediction on a dataset with no labels. You can also evaluate the model on a dataset with labels to obtain its test accuracy.
<center> ![](./PipelineTest_en.png)</center>
...
--init_model_path=./output/pass-0000x
```
We will give an example of performing prediction using the recurrent model on a dataset with no labels. You can refer to the <a href="../../api/predict/swig_py_paddle_en.html">Python Prediction API</a> tutorial, or other <a href="../../tutorials/index_en.html">demos</a>, for the prediction process using Python (a rough sketch is included after the script below). You can also use the following script for inference or evaluation.
inference script (predict.sh):
...
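As a rough sketch of the Python prediction route mentioned above (the module paths, function signatures, the configuration file name `trainer_config.lstm.py`, the model directory, and the vocabulary size are all assumptions based on the SWIG-based prediction API, and may differ across PaddlePaddle versions):

```python
from py_paddle import swig_paddle, DataProviderConverter
from paddle.trainer.PyDataProvider2 import integer_value_sequence
from paddle.trainer.config_parser import parse_config

dict_size = 10000                    # assumed vocabulary size
model_dir = "./output/pass-00000"    # assumed pass to load

# Initialize PaddlePaddle in CPU mode and build the network from the
# training configuration; is_predict=1 is assumed to switch the config
# into prediction mode (no cost layer).
swig_paddle.initPaddle("--use_gpu=0")
conf = parse_config("trainer_config.lstm.py", "is_predict=1")
network = swig_paddle.GradientMachine.createFromConfigProto(conf.model_config)
network.loadParameters(model_dir)

# The converter turns word-id sequences into the input format expected
# by forwardTest.
converter = DataProviderConverter([integer_value_sequence(dict_size)])

word_ids = [4, 27, 1053]             # an already-tokenized, id-mapped review
result = network.forwardTest(converter([[word_ids]]))
print(result[0]["value"])            # per-class probabilities
```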
...
* \--config_args: Other configuration arguments.
* \--init_model_path: The path to the initial model parameters.
By default, the trainer will save the model after every pass. You can also specify `saving_period_by_batches` to save the model every given number of batches. You can use `show_parameter_stats_period` to print the statistics of the parameters, which is very useful for parameter tuning. Other command line arguments can be found in the <a href="../../howto/cmd_parameter/index_en.html">command line argument documentation</a>.