Commit 8fd3d396 authored by M MRXLT

fix mem_optim ir_optim use_mkl arguments

Parent 19811d4e
@@ -111,9 +111,9 @@ python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --po
 | `port` | int | `9292` | Exposed port of current service to users|
 | `name` | str | `""` | Service name, can be used to generate HTTP request url |
 | `model` | str | `""` | Path of paddle model directory to be served |
-| `mem_optim` | bool | `False` | Enable memory / graphic memory optimization |
-| `ir_optim` | bool | `False` | Enable analysis and optimization of calculation graph |
-| `use_mkl` (Only for cpu version) | bool | `False` | Run inference with MKL |
+| `mem_optim` | - | - | Enable memory / graphic memory optimization |
+| `ir_optim` | - | - | Enable analysis and optimization of calculation graph |
+| `use_mkl` (Only for cpu version) | - | - | Run inference with MKL |

 Here, we use `curl` to send a HTTP POST request to the service we just started. Users can use any python library to send HTTP POST as well, e.g, [requests](https://requests.readthedocs.io/en/master/).
 </center>
......
@@ -115,9 +115,9 @@ python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --po
 | `port` | int | `9292` | Exposed port of current service to users|
 | `name` | str | `""` | Service name, can be used to generate HTTP request url |
 | `model` | str | `""` | Path of paddle model directory to be served |
-| `mem_optim` | bool | `False` | Enable memory optimization |
-| `ir_optim` | bool | `False` | Enable analysis and optimization of calculation graph |
-| `use_mkl` (Only for cpu version) | bool | `False` | Run inference with MKL |
+| `mem_optim` | - | - | Enable memory optimization |
+| `ir_optim` | - | - | Enable analysis and optimization of calculation graph |
+| `use_mkl` (Only for cpu version) | - | - | Run inference with MKL |

 We use the `curl` command to send an HTTP POST request to the service we just started. Users can also call a python library to send HTTP POST requests; please refer to the English documentation of [requests](https://requests.readthedocs.io/en/master/).
 </center>
......
@@ -59,7 +59,7 @@ the script of client side bert_client.py is as follow:
 import os
 import sys
 from paddle_serving_client import Client
-from paddle_serving_app import ChineseBertReader
+from paddle_serving_app.reader import ChineseBertReader

 reader = ChineseBertReader()
 fetch = ["pooled_output"]
......
@@ -52,7 +52,7 @@ pip install paddle_serving_app
 ``` python
 import sys
 from paddle_serving_client import Client
-from paddle_serving_app import ChineseBertReader
+from paddle_serving_app.reader import ChineseBertReader

 reader = ChineseBertReader()
 fetch = ["pooled_output"]
......
@@ -40,10 +40,14 @@ def parse_args():  # pylint: disable=doc-string-missing
     parser.add_argument(
         "--device", type=str, default="cpu", help="Type of device")
     parser.add_argument(
-        "--mem_optim", type=bool, default=False, help="Memory optimize")
+        "--mem_optim",
+        default=False,
+        action="store_true",
+        help="Memory optimize")
     parser.add_argument(
-        "--ir_optim", type=bool, default=False, help="Graph optimize")
-    parser.add_argument("--use_mkl", type=bool, default=False, help="Use MKL")
+        "--ir_optim", default=False, action="store_true", help="Graph optimize")
+    parser.add_argument(
+        "--use_mkl", default=False, action="store_true", help="Use MKL")
     parser.add_argument(
         "--max_body_size",
         type=int,
......
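The switch from `type=bool` to `action="store_true"` above fixes a well-known argparse pitfall: argparse applies `type` as a converter to the raw string, and `bool("False")` is `True` because any non-empty string is truthy, so `--mem_optim False` would still enable the option. A minimal standalone sketch (not part of the commit, using `--mem_optim` only as an illustration) of the before and after behavior:

```python
import argparse

# bool() on any non-empty string is True, including the string "False".
assert bool("False") is True

# Old style: type=bool silently turns every explicit value into True.
old = argparse.ArgumentParser()
old.add_argument("--mem_optim", type=bool, default=False)
assert old.parse_args(["--mem_optim", "False"]).mem_optim is True  # surprising

# New style (as in this commit): a presence flag that takes no value.
new = argparse.ArgumentParser()
new.add_argument("--mem_optim", default=False, action="store_true")
assert new.parse_args([]).mem_optim is False
assert new.parse_args(["--mem_optim"]).mem_optim is True
```

With `store_true` the flag is simply present or absent, which is also why the README tables above drop the type and default columns for these options.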
@@ -47,9 +47,12 @@ def serve_args():
     parser.add_argument(
         "--name", type=str, default="None", help="Default service name")
     parser.add_argument(
-        "--mem_optim", type=bool, default=False, help="Memory optimize")
+        "--mem_optim",
+        default=False,
+        action="store_true",
+        help="Memory optimize")
     parser.add_argument(
-        "--ir_optim", type=bool, default=False, help="Graph optimize")
+        "--ir_optim", default=False, action="store_true", help="Graph optimize")
     parser.add_argument(
         "--max_body_size",
         type=int,
......