Running the sample programs in this directory requires PaddlePaddle v0.10.0. If your version is lower than this requirement, please follow the [installation instructions](http://www.paddlepaddle.org/docs/develop/documentation/zh/build_and_install/pip_install_cn.html) to update PaddlePaddle.

# Learning to Rank

Learning to rank [1] is a machine learning technique for building ranking models, and it plays an important role in areas of computer science such as information retrieval, natural language processing, and data mining. The primary purpose of learning to rank is to order a given set of documents so that the order reflects their relevance to a query request. In this example we train two classical ranking models, RankNet [4] and LambdaRank [6], on an annotated corpus; the resulting models can then rank the candidate documents for any query request.

## Background Information

Learning to rank is an application of machine learning. Hand-crafted ranking rules can neither handle candidate sets at large scale nor assign appropriate weights to candidates coming from different channels, which is why learned ranking models are widely used in practice. Learning to rank originated in the field of information retrieval and is still the core component of many information retrieval systems, such as ranking search results in a search engine, ranking candidate items in a recommendation system, and ranking ads in online advertising. In this example we use a document retrieval task to illustrate learning-to-rank models.

![image](https://github.com/PaddlePaddle/models/blob/develop/ltr/images/search_engine_example.png?raw=true)

Figure 1. The role of the ranking model in a typical document-retrieval application: a search engine.

Assume there is a set of documents $S$. The document retrieval task is to order the documents by their relevance to a query request. Given a query, the ranking engine scores every document and arranges the documents in descending order of the scores to produce the query result. During training, the model learns from the human-annotated scores of the sorted documents for each query; during prediction, the model produces the document order for a newly received query.

Common learning-to-rank methods fall into the following three categories.

- Pointwise approach

  In this case the learning-to-rank problem is treated as a regression problem. A single input sample is a **score-document** pair: the relevance score of each query-document pair is used as a real number or an ordinal, so each individual query-document pair becomes one training sample point (the origin of the word "pointwise"). At prediction time, the model outputs a relevance score for the given query-document pair.
- Pairwise approach

  In this case the learning-to-rank problem is approximated by a classification problem: learning a binary classifier that tells which document in a given pair is better. A single input sample is a **label-document pair**. For the multiple result documents of one query, any two documents are combined to form a document pair that serves as an input sample. That is, we learn a binary classifier whose input is a pair of documents A-B (the origin of "pairwise"); according to whether the relevance of A is better than that of B, the classifier outputs the label 1 or 0. After classifying all document pairs, we obtain a set of partial-order relations from which the order of the documents can be constructed. The principle of this kind of method is to reduce the number of inversely ordered document pairs in the ranking of the given document set $S$, and thereby optimize the ranking result.

- Listwise approach

  These algorithms try to directly optimize one of the ranking evaluation measures, averaged over all queries in the training data. A single input sample is a **ranked list of documents**. An appropriate measurement function is constructed to quantify the difference between the current document ranking and the optimal ranking, and this evaluation measure is then optimized to obtain the ranking model. The optimization is difficult because most ranking loss functions are not continuous, smooth functions.

![image](https://github.com/PaddlePaddle/models/blob/develop/ltr/images/learning_to_rank.jpg?raw=true)

Figure 2. The three types of ranking methods.

## Experimental data

The experimental data in this example comes from LETOR, a benchmark corpus for learning to rank. Part of the query result documents comes from the Gov2 web collection; the corpus contains about 1700 query requests with their result document lists, and the relevance of the documents has been annotated manually. Each query has a unique query id and corresponds to a number of related documents, which together form the result list of that query request. The feature vector of each document is represented by a one-dimensional array and is paired with the human-annotated relevance score for the query.

This example automatically downloads and caches the LETOR MQ2007 dataset the first time it runs; no manual download is needed. The **mq2007** dataset module can generate data in the formats required by each of the three types of ranking models, so the desired **format** needs to be specified. For example, the call interface is

```
pairwise_train_dataset = functools.partial(paddle.dataset.mq2007.train, format="pairwise")
for label, left_doc, right_doc in pairwise_train_dataset():
    ...
```
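As a quick check of the data pipeline, the short sketch below iterates the pairwise reader shown above once and inspects the first sample. It assumes the `paddle.dataset.mq2007` module described above; the `format="pointwise"` and `format="listwise"` values mentioned in the comment are the formats the other two method families would use, and the exact tuple layout of those formats should be checked in the module itself.

```python
from __future__ import print_function
import functools

import paddle.v2 as paddle

# Pairwise reader as documented above; the mq2007 module also accepts
# format="pointwise" and format="listwise" for the other two method families.
pairwise_train_dataset = functools.partial(
    paddle.dataset.mq2007.train, format="pairwise")

num_pairs = 0
for label, left_doc, right_doc in pairwise_train_dataset():
    if num_pairs == 0:
        # label is 1 if left_doc is more relevant than right_doc, else 0;
        # left_doc/right_doc are the per-document LETOR feature vectors.
        print("label:", label)
        print("feature dimension:", len(left_doc))
    num_pairs += 1
print("document pairs in one pass:", num_pairs)
```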
## Model overview

For the ranking models, this example provides the RankNet model (a Pairwise method) and the LambdaRank model (a Listwise method), representing the two types of learning methods respectively. The ranking model of the Pointwise method can be reduced to a regression problem; please refer to the [recommender system](https://github.com/PaddlePaddle/book/blob/develop/05.recommender_system/README.cn.md) chapter of the PaddleBook.

## RankNet model

[RankNet](http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf) is a classic Pairwise ranking method and a typical feed-forward neural network ranking model. The $i$-th document in the document collection $S$ is denoted as $U_i$ and its feature vector as $x_i$. For a given document pair $U_i$, $U_j$, RankNet maps each input document feature vector $x$ to a score $f(x)$, obtaining $s_i=f(x_i)$ and $s_j=f(x_j)$. The probability that $U_i$ is more relevant than $U_j$ is recorded as $P_{i,j}$:

$$P_{i,j}=P(U_{i}>U_{j})=\frac{1}{1+e^{-\sigma (s_{i}-s_{j})}}$$

Because most ranking metric functions are non-continuous and non-smooth, RankNet needs a cost function $C$ that can be optimized directly. Cross entropy is used to measure the prediction cost, and the loss function $C$ is written as

$$C_{i,j}=-\bar{P_{i,j}}\log P_{i,j}-(1-\bar{P_{i,j}})\log(1-P_{i,j})$$

where $\bar{P_{i,j}}$ is the true probability, defined as

$$\bar{P_{i,j}}=\frac{1}{2}(1+S_{i,j})$$

Here $S_{i,j}\in\{+1,-1\}$ is the label of the pair formed by $U_i$ and $U_j$, i.e. whether the relevance of $U_i$ is better than that of $U_j$. Substituting, a differentiable loss function is obtained:

$$C=\frac{1}{2}(1-S_{i,j})\sigma (s_{i}-s_{j})+\log(1+e^{-\sigma (s_{i}-s_{j})})$$

It can be optimized with conventional gradient descent methods; see [RankNet](http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf) for details. At the same time, the gradient of the cost with respect to the score of document $U_i$ is obtained:

$$\lambda_{i,j}=\frac{\partial C}{\partial s_{i}} = \frac{1}{2}(1-S_{i,j})-\frac{1}{1+e^{\sigma (s_{i}-s_{j})}}$$

This expression describes how much document $U_i$ should move up or down during this round of ranking optimization.
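To make the formulas concrete, the small sketch below evaluates $P_{i,j}$, the cost $C$, and the gradient $\lambda_{i,j}$ for one document pair. It uses plain NumPy with an illustrative $\sigma$ and example scores; none of these values come from ranknet.py.

```python
import numpy as np

sigma = 1.0            # shape parameter of the logistic, illustrative value
s_i, s_j = 2.5, 1.0    # example model scores f(x_i), f(x_j)
S_ij = 1               # label: U_i is more relevant than U_j

# P_{i,j}: predicted probability that U_i ranks above U_j
P_ij = 1.0 / (1.0 + np.exp(-sigma * (s_i - s_j)))

# Cross-entropy cost C in its simplified, differentiable form
C = 0.5 * (1 - S_ij) * sigma * (s_i - s_j) \
    + np.log(1.0 + np.exp(-sigma * (s_i - s_j)))

# lambda_{i,j}: gradient of C with respect to s_i.
# It is negative here, meaning the cost decreases as s_i grows,
# i.e. U_i should move up in the ranking.
lambda_ij = 0.5 * (1 - S_ij) - 1.0 / (1.0 + np.exp(sigma * (s_i - s_j)))

print("P_ij = %.4f, C = %.4f, lambda_ij = %.4f" % (P_ij, C, lambda_ij))
```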
Based on the above derivation, the RankNet network structure is built from several hidden layers and fully connected layers. As shown in the figure, the document features pass through the hidden and fully connected layers and are transformed layer by layer from the low-level feature space to a high-level feature space. The sub-networks for docA and docB are symmetrical, and their outputs feed into the final RankCost layer.

![image](https://github.com/sunshine-2015/models/blob/patch-4/ltr/images/ranknet_en.png?raw=true)

Figure 3. The structure of the RankNet network.

- Fully connected layer: every node in the previous layer is connected to every node in the next layer. In this example **paddle.layer.fc** is used; note that the fully connected layer that feeds the RankCost layer has an output dimension of 1.
- RankCost layer: the RankCost layer is the core of the RankNet ranking network. It measures whether docA is more relevant than docB, produces the predicted value, and compares it with the label. Cross entropy is used as the loss function and is optimized with gradient descent. Details can be found in [RankNet](http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf) [4].

Because the Pairwise network structure is left-right symmetrical, only half of the network needs to be defined; the other half shares its parameters. PaddlePaddle allows sharing of connections in the network structure: parameters with the same name are shared. The sample code for defining the network structure is given in the **half_ranknet** function in [ranknet.py](https://github.com/PaddlePaddle/models/blob/develop/ltr/ranknet.py).

The structure defined in the **half_ranknet** function is the same model structure as in Figure 3: two layers, a fully connected layer with **hidden_size=10** followed by a fully connected layer with **hidden_size=1**. In this example, **input_dim** refers to the feature dimension of a **single** input document, and the label takes the value 1 or 0. Each input sample has the structure **<label, left-document features, right-document features>**, matching the pairwise reader shown above.
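For reference, below is a minimal sketch of how such a symmetric structure can be wired up with the PaddlePaddle v2 API, following the description above (two fully connected layers of sizes 10 and 1, parameter sharing via identical parameter names, and a RankCost layer). The authoritative implementation is the **half_ranknet** function in ranknet.py; the layer and parameter names used here are illustrative.

```python
import paddle.v2 as paddle


def half_ranknet(name_prefix, input_dim):
    """One half of the symmetric RankNet: document features -> scalar score.

    Layers created with the same parameter name share parameters, so calling
    this twice (for the left and right documents) reuses the same weights.
    """
    data = paddle.layer.data(
        name=name_prefix + "_data",
        type=paddle.data_type.dense_vector(input_dim))
    hidden = paddle.layer.fc(
        input=data,
        size=10,
        act=paddle.activation.Tanh(),
        param_attr=paddle.attr.Param(name="hidden_w1"))
    score = paddle.layer.fc(
        input=hidden,
        size=1,
        act=paddle.activation.Linear(),
        param_attr=paddle.attr.Param(name="output_w"))
    return score


def ranknet(input_dim):
    """Full RankNet: two shared half networks joined by a RankCost layer."""
    label = paddle.layer.data(
        name="label", type=paddle.data_type.dense_vector(1))
    left_score = half_ranknet("left", input_dim)
    right_score = half_ranknet("right", input_dim)
    # rank_cost implements the pairwise cross-entropy cost described above
    return paddle.layer.rank_cost(
        left=left_score, right=right_score, label=label)
```

In a full training script this cost would then be combined with `paddle.init(...)`, `paddle.parameters.create(cost)`, and a `paddle.trainer.SGD` driven by the pairwise reader shown earlier.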