diff --git a/doc/ABTEST_IN_PADDLE_SERVING.md b/doc/ABTEST_IN_PADDLE_SERVING.md
index 931da839d2d84aca8b3116201ba34a074db6f0e9..69e5ff4b6fdf11d3764f94cba83beee82f959c85 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING.md
@@ -19,6 +19,7 @@ sh get_data.sh
 
 The following Python code will process the data `test_data/part-0` and write to the `processed.data` file.
 
+[//file]:#process.py
 ``` python
 from imdb_reader import IMDBDataset
 imdb_dataset = IMDBDataset()
@@ -59,7 +60,8 @@ exit
 
 Run the following Python code on the host computer to start client. Make sure that the host computer is installed with the `paddle-serving-client` package.
 
-``` go
+[//file]:#ab_client.py
+``` python
 from paddle_serving_client import Client
 
 client = Client()
@@ -94,3 +96,24 @@ When making prediction on the client side, if the parameter `need_variant_tag=Tr
 [lstm](total: 1867) acc: 0.490091055169
 [bow](total: 217) acc: 0.73732718894
 ```
+
+
diff --git a/doc/BERT_10_MINS.md b/doc/BERT_10_MINS.md
index e668b3207c5228309d131e2353e815d26c8d4625..71f6f065f4101aae01e077910fc5b6bd6b039b46 100644
--- a/doc/BERT_10_MINS.md
+++ b/doc/BERT_10_MINS.md
@@ -102,4 +102,5 @@ if [[ $? -eq 0 ]]; then
 else
     echo "test fail"
 fi
+ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
 -->
diff --git a/doc/TRAIN_TO_SERVICE.md b/doc/TRAIN_TO_SERVICE.md
index c0d7d2ea3794a07e39407bb0b53a822cfedd173e..40d5dd95e4d7aad3b198898559321419b4b17833 100644
--- a/doc/TRAIN_TO_SERVICE.md
+++ b/doc/TRAIN_TO_SERVICE.md
@@ -288,7 +288,7 @@ The script receives data from standard input and prints out the probability that
 The client implemented in the previous step runs the prediction service as an example. The usage method is as follows:
 
 ```shell
-cat test_data/part-0 | python test_client.py imdb_lstm_client_conf / serving_client_conf.prototxt imdb.vocab
+cat test_data/part-0 | python test_client.py imdb_lstm_client_conf/serving_client_conf.prototxt imdb.vocab
 ```
 
 Using 2084 samples in the test_data/part-0 file for test testing, the model prediction accuracy is 88.19%.
@@ -350,7 +350,7 @@ In the above command, the first parameter is the saved server-side model and con
 After starting the HTTP prediction service, you can make prediction with a single command:
 
 ```
-curl -H "Content-Type: application / json" -X POST -d '{"words": "i am very sad | 0", "fetch": ["prediction"]}' http://127.0.0.1:9292/imdb/prediction
+curl -H "Content-Type: application/json" -X POST -d '{"words": "i am very sad | 0", "fetch": ["prediction"]}' http://127.0.0.1:9292/imdb/prediction
 ```
 
 When the inference process is normal, the prediction probability is returned, as shown below.
diff --git a/doc/doc_test_list b/doc/doc_test_list
index ef019de05d6075801434bae91de8cbdceb1fea91..8812f85e27f5230c68ad609e37840bbcc6589270 100644
--- a/doc/doc_test_list
+++ b/doc/doc_test_list
@@ -1 +1,2 @@
 BERT_10_MINS.md
+ABTEST_IN_PADDLE_SERVING.md