| `use_trt` (only for the TensorRT version) | - | - | Run inference with TensorRT |
</center>
```python
# A user can access the RPC service through the paddle_serving_client API
from paddle_serving_client import Client
import numpy as np
...
print(fetch_map)
```
Here, the `client.predict` function takes two arguments. `feed` is a Python dict mapping model input variable alias names to their values. `fetch` specifies which prediction variables the server should return. In the example, the alias names `"x"` and `"price"` were assigned when the servable model was saved during training.
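As a minimal sketch of this feed/fetch convention: only the alias names `"x"` and `"price"` come from the example above; the 13-feature input shape is an assumption based on the UCI housing model.

```python
import numpy as np

# feed maps input alias names to values; fetch lists the output aliases to return
feed = {"x": np.random.rand(1, 13).astype("float32")}  # assumed input shape for the UCI housing model
fetch = ["price"]

# with a connected client this call would be:
# fetch_map = client.predict(feed=feed, fetch=fetch)
print(list(feed.keys()), feed["x"].shape, fetch)
```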
### WEB service
...

the response is
```
{"result":{"price":[[18.901151657104492]]}}
```
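For instance, the JSON body shown above can be unpacked on the client side like this (this only parses the sample response; the request itself still goes through the HTTP service):

```python
import json

# sample response body from the web service above
body = '{"result":{"price":[[18.901151657104492]]}}'
resp = json.loads(body)

# "price" is a batch of predictions: one row per input sample
price = resp["result"]["price"][0][0]
print(price)  # 18.901151657104492
```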
<h2 align="center">Some Key Features of Paddle Serving</h2>

- Integrates with the Paddle training pipeline seamlessly; most Paddle models can be deployed **with one line command**.
- **Industrial serving features** supported, such as model management, online loading, and online A/B testing.
- **Distributed key-value indexing** supported, which is especially useful for large-scale sparse features as model inputs.
- **Highly concurrent and efficient communication** between clients and servers.
- **Multiple programming languages** supported on the client side, such as Golang, C++, and Python.
<h2 align="center">Document</h2>
### New to Paddle Serving
...
To connect with other users and contributors, you are welcome to join our [Slack channel]
If you want to contribute code to Paddle Serving, please refer to the [Contribution Guidelines](doc/CONTRIBUTE.md).
- Special thanks to [@BeyondYourself](https://github.com/BeyondYourself) for complementing the gRPC tutorial, updating the FAQ doc, and fixing the mkdir command
- Special thanks to [@mcl-stone](https://github.com/mcl-stone) for updating the faster_rcnn benchmark
- Special thanks to [@cg82616424](https://github.com/cg82616424) for updating the unet benchmark and fixing a resize comment error
### Feedback
For any feedback or to report a bug, please propose a [GitHub Issue](https://github.com/PaddlePaddle/Serving/issues).