curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"url": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"}], "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
```
- **Request result**:
``` shell
{"label":"daisy","prob":0.9341403245925903}
```
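The same request can be composed from Python. The sketch below only rebuilds and prints the JSON body from the curl example; the actual HTTP call is shown commented out, since it requires the service to be running (the use of the third-party `requests` package is an assumption, not part of the original example):

```python
import json

# Rebuild the JSON body the curl command above sends
# (URL and keys taken from the example).
payload = {
    "feed": [{"url": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"}],
    "fetch": ["score"],
}
body = json.dumps(payload)
print(body)

# With the service running on port 9292, the same request could be sent with
# the `requests` package (not executed here):
# import requests
# resp = requests.post("http://127.0.0.1:9292/image/prediction",
#                      data=body, headers={"Content-Type": "application/json"})
# print(resp.json())
```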
<h2 align="center">Some Key Features</h2>
- Seamless integration with the Paddle training pipeline: most Paddle models can be deployed **with a one-line command**.
- **Industrial serving features** supported, such as model management, online loading, and online A/B testing.
- **Distributed key-value indexing** supported, which is especially useful for large-scale sparse features as model inputs.
- **Highly concurrent and efficient communication** between clients and servers.
- **Multiple programming languages** supported on the client side, such as Golang, C++, and Python.
- **Extensible framework design** that can support model serving beyond Paddle.
<h2 align="center">Quick Start Example</h2>
### Boston House Price Prediction model
...
...
print(fetch_map)
```
Here, the `client.predict` function takes two arguments: `feed` is a Python `dict` mapping each model input variable's alias name to its value, and `fetch` specifies the prediction variables to be returned from the server. In this example, the names `"x"` and `"price"` were assigned when the servable model was saved during training.
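Since the full example code above is elided, a minimal sketch of such a call follows. The config path, endpoint, and input values are assumptions, and the client call itself is commented out so the snippet does not require a live server:

```python
# Feed maps the input variable's alias name ("x") to its value;
# fetch names the prediction variable to return ("price").
# The 13 input values below are placeholder housing features (an assumption).
feed = {"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727,
              -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}
fetch = ["price"]

# With a running Paddle Serving server, the call would look like:
# from paddle_serving_client import Client
# client = Client()
# client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
# client.connect(["127.0.0.1:9393"])
# fetch_map = client.predict(feed=feed, fetch=fetch)
# print(fetch_map)

print(sorted(feed.keys()), fetch)
```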
<h2 align="center">Pre-built services with Paddle Serving</h2>
<h3 align="center">Chinese Word Segmentation</h3>
- **Description**:
``` shell
Chinese word segmentation HTTP service that can be deployed with one line command.