Unverified commit 34cea753, authored by Jiawei Wang, committed by GitHub

Merge pull request #422 from wangjiawei04/doc_night

Add ABTEST doc test
@@ -19,6 +19,7 @@ sh get_data.sh
The following Python code processes the data in `test_data/part-0` and writes the result to the `processed.data` file.
[//file]:#process.py
``` python
from imdb_reader import IMDBDataset
imdb_dataset = IMDBDataset()
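The diff truncates `process.py` here. The rest of the script writes one line per sample to `processed.data`; below is a stdlib-only sketch of a plausible record format (the `format_sample` helper and the `ids;label` layout are illustrative assumptions, not taken from the original file):

``` python
def format_sample(word_ids, label):
    # One processed.data record: comma-joined word ids, then the
    # 0/1 sentiment label (assumed layout, for illustration only).
    return "{};{}".format(",".join(str(i) for i in word_ids), label)

# Example: three word ids with a positive label.
print(format_sample([12, 7, 400], 1))  # → 12,7,400;1
```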
@@ -59,7 +60,8 @@ exit
Run the following Python code on the host to start the client. Make sure the `paddle-serving-client` package is installed on the host first.
[//file]:#ab_client.py
``` python
from paddle_serving_client import Client
client = Client()
@@ -94,3 +96,24 @@ When making prediction on the client side, if the parameter `need_variant_tag=True` is set, the response also carries the tag of the variant that served each request:
[lstm](total: 1867) acc: 0.490091055169
[bow](total: 217) acc: 0.73732718894
```
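Per-tag totals and accuracies like the output above can be computed on the client by bucketing each response on its variant tag. A minimal stdlib sketch (the `tally` helper is illustrative, not part of `paddle-serving-client`):

``` python
from collections import defaultdict

def tally(results):
    # results: iterable of (variant_tag, correct) pairs collected from
    # tagged responses; returns {tag: (total, accuracy)}.
    total = defaultdict(int)
    hits = defaultdict(int)
    for tag, ok in results:
        total[tag] += 1
        hits[tag] += bool(ok)
    return {t: (total[t], hits[t] / total[t]) for t in total}

print(tally([("lstm", True), ("lstm", False), ("bow", True)]))
```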
<!--
cp ../Serving/python/examples/imdb/get_data.sh .
cp ../Serving/python/examples/imdb/imdb_reader.py .
pip install -U paddle_serving_server
pip install -U paddle_serving_client
pip install -U paddlepaddle
sh get_data.sh
python process.py
python -m paddle_serving_server.serve --model imdb_bow_model --port 8000 --workdir workdir1 &
sleep 5
python -m paddle_serving_server.serve --model imdb_lstm_model --port 9000 --workdir workdir2 &
sleep 5
python ab_client.py >log.txt
if [[ $? -eq 0 ]]; then
echo "test success"
else
echo "test fail"
fi
ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
-->
@@ -102,4 +102,5 @@ if [[ $? -eq 0 ]]; then
else
echo "test fail"
fi
ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
-->
@@ -288,7 +288,7 @@ The script receives data from standard input and prints out the probability that the sentence is positive.
The client implemented in the previous step can be used to query the prediction service. Usage:
```shell
cat test_data/part-0 | python test_client.py imdb_lstm_client_conf/serving_client_conf.prototxt imdb.vocab
```
Testing with the 2084 samples in the `test_data/part-0` file, the model prediction accuracy is 88.19%.
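The 88.19% figure is simply the fraction of correct predictions over the 2084 samples; for reference, a trivial sketch of that computation (the `accuracy` function name is illustrative):

``` python
def accuracy(predictions, labels):
    # Fraction of predictions that match the ground-truth labels.
    assert len(predictions) == len(labels) and labels
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

print(accuracy([1, 0, 1, 1], [1, 1, 1, 1]))  # → 0.75
```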
@@ -350,7 +350,7 @@ In the above command, the first parameter is the saved server-side model and configuration file.
After starting the HTTP prediction service, you can make prediction with a single command:
```shell
curl -H "Content-Type: application/json" -X POST -d '{"words": "i am very sad | 0", "fetch": ["prediction"]}' http://127.0.0.1:9292/imdb/prediction
```
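The same request can be issued from Python with only the standard library. A sketch, with the endpoint and payload copied from the curl line above (`build_payload` and `predict` are illustrative names, and the HTTP service must be running for `predict` to succeed):

``` python
import json
from urllib import request

def build_payload(words):
    # JSON body matching the curl example above.
    return json.dumps({"words": words, "fetch": ["prediction"]}).encode("utf-8")

def predict(words, url="http://127.0.0.1:9292/imdb/prediction"):
    req = request.Request(url, data=build_payload(words),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires the HTTP service to be up
        return json.loads(resp.read().decode("utf-8"))
```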
When the inference process is normal, the prediction probability is returned, as shown below.
......
BERT_10_MINS.md
ABTEST_IN_PADDLE_SERVING.md