| `ir_optim` | - | - | Enable analysis and optimization of the computation graph |
| `use_mkl` (CPU version only) | - | - | Run inference with Intel MKL |
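For instance, the flags above can be passed on the command line when launching the server. This is a hypothetical invocation: the model directory and port below are placeholders, so substitute the values for your own deployment.

```shell
# Hypothetical launch command; model directory and port are placeholders.
# --ir_optim enables graph analysis and optimization before inference;
# --use_mkl is only available in the CPU build.
python -m paddle_serving_server.serve \
    --model uci_housing_model \
    --port 9292 \
    --ir_optim \
    --use_mkl
```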
Here, we use `curl` to send an HTTP POST request to the service we just started. Users can send the HTTP POST request with any Python library as well, e.g., [requests](https://requests.readthedocs.io/en/master/).
</center>
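As a sketch of the `requests` alternative mentioned above, the snippet below sends the same kind of JSON prediction request. The URL, feed keys, and fetch names are placeholders, not values from this document: use whatever your own serving instance was started with.

```python
# Hypothetical example of replacing the curl call with the `requests`
# library. The endpoint URL, feed keys, and fetch names are placeholders.
import requests

def build_payload(feed, fetch):
    # The HTTP endpoint accepts a JSON body holding a list of feed dicts
    # (input variable alias names -> values) and a list of names to fetch.
    return {"feed": [feed], "fetch": fetch}

def predict(url, feed, fetch, timeout=10):
    # POST the payload as JSON and return the decoded JSON response.
    resp = requests.post(url, json=build_payload(feed, fetch), timeout=timeout)
    resp.raise_for_status()
    return resp.json()

# Example call (requires a running service; URL is a placeholder):
# result = predict("http://127.0.0.1:9292/uci/prediction",
#                  feed={"x": [0.0254, -0.1103]}, fetch=["price"])
```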
...
...
### About Efficiency
- [How to profile Paddle Serving latency?](python/examples/util)
- [How to optimize performance? (Chinese)](doc/PERFORMANCE_OPTIM_CN.md)
- [How to optimize performance?](doc/PERFORMANCE_OPTIM.md)
- [Deploy multiple services on one GPU (Chinese)](doc/MULTI_SERVICE_ON_ONE_GPU_CN.md)