# Creating a Prediction Service from Scratch
## 1. Example Overview
Image classification assigns images to categories according to their semantic content. It is a fundamental problem in computer vision and the basis of higher-level vision tasks such as image detection, image segmentation, object tracking and behavior analysis. Image classification is widely applied: face recognition and intelligent video analytics in security, scene recognition in transportation, content-based image retrieval and automatic album categorization on the web, image recognition in medicine, and more.
Paddle Serving already ships a ResNet-based image classification service: build Paddle Serving following the steps in INSTALL.md, then start the client and server as described in GETTING_STARTED.md to see the service in action.
The rest of this document walks through building a model prediction service from scratch, using the image classification task as the example.
## 2. Server Side
### 2.1 Define the Prediction Interface
**Add file: serving/proto/image_class.proto**
The Paddle Serving server and client communicate over brpc. The communication protocol and format are customizable; we choose the baidu_std protocol, which uses protobuf as its basic data exchange format. See [brpc documentation: baidu_std](https://github.com/apache/incubator-brpc/blob/master/docs/cn/baidu_std.md) for details.
The protobuf definition of the image classification prediction interface is as follows:
```c++
syntax = "proto2";
import "pds_option.proto";
import "builtin_format.proto";
package baidu.paddle_serving.predictor.image_classification;
option cc_generic_services = true;

message ClassifyResponse {
  repeated baidu.paddle_serving.predictor.format.DensePrediction predictions = 1;
};

message Request {
  repeated baidu.paddle_serving.predictor.format.XImageReqInstance instances = 1;
};

message Response {
  // Each json string is serialized from ClassifyResponse predictions
  repeated baidu.paddle_serving.predictor.format.XImageResInstance predictions = 1;
};

service ImageClassifyService {
  rpc inference(Request) returns (Response);
  rpc debug(Request) returns (Response);
  option (pds.options).generate_impl = true;
};
```
In this definition:
`service ImageClassifyService` defines an RPC service and declares two RPC methods, `inference` and `debug`, each of which takes a `Request` message and returns a `Response` message.
The `DensePrediction`, `XImageReqInstance` and `XImageResInstance` message types are defined in other .proto files, so they must be pulled in with the `import "builtin_format.proto"` statement.
`generate_impl = true` tells the protobuf compiler to generate the implementation of the RPC service. (On the client side this becomes `generate_stub = true`, which tells the protobuf compiler to generate the RPC stub instead.)
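For reference, `generate_impl` and `generate_stub` are custom service options declared in pds_option.proto. The sketch below shows how such options are typically declared; the field numbers here are illustrative only, so consult predictor/proto/pds_option.proto for the actual definition:
```c++
syntax = "proto2";
import "google/protobuf/descriptor.proto";
package pds;

message PaddleServiceOption {
  optional bool generate_impl = 1 [default = false];
  optional bool generate_stub = 2 [default = false];
};

extend google.protobuf.ServiceOptions {
  // The extension field number is illustrative only.
  optional PaddleServiceOption options = 80000;
};
```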
### 2.2 Server-Side Implementation
Processing an image classification request is designed as three stages, each corresponding to one OP:
- Read the request: extract the sample data from the Request message
- Predict: call the Paddle inference library on the samples and keep the results
- Write the prediction results into the Response
The framework then takes care of returning the Response to the client.
#### 2.2.1 Custom OP Operators
**Add reader_op.cpp, classify_op.cpp and write_json_op.cpp under serving/op/**
- Pre-processing operator (ReaderOp, serving/op/reader_op.cpp): reads the image byte stream from the Request, decodes it with OpenCV, fills a tensor object and writes it to its channel;
- Prediction operator (ClassifyOp, serving/op/classify_op.cpp): takes the input tensor from ReaderOp's channel, allocates a temporary output tensor, calls ModelToolkit to run prediction, and writes the output tensor to its channel;
- Post-processing operator (WriteJsonOp, serving/op/write_json_op.cpp): takes the output tensor from ClassifyOp's channel, serializes it into a JSON string and writes it out as the RPC output.
The full implementations are in the demo source code; a simplified sketch of the post-processing operator follows.
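The sketch below illustrates what a WriteJsonOp-style `inference()` might look like. It is a sketch under assumptions, not the demo's actual code: the class declaration and the framework's OP registration macros are omitted, JSON serialization uses protobuf's util library rather than whatever the demo uses, and `XImageResInstance` is assumed to carry a `response_json` string field (see builtin_format.proto).
```c++
#include <string>
#include <google/protobuf/util/json_util.h>
// Generated from serving/proto/image_class.proto (include path illustrative).
#include "image_class.pb.h"

using baidu::paddle_serving::predictor::image_classification::ClassifyResponse;
using baidu::paddle_serving::predictor::image_classification::Response;

// Class declaration and OP registration macros omitted for brevity.
int WriteJsonOp::inference() {
  // Fetch the upstream ClassifyOp's output from its channel, by node name.
  const ClassifyResponse* classify_out =
      get_depend_argument<ClassifyResponse>("image_classify_op");
  if (classify_out == NULL) {
    return -1;
  }
  // The data object of the last OP in the workflow becomes the RPC Response.
  Response* res = mutable_data<Response>();
  for (int i = 0; i < classify_out->predictions_size(); ++i) {
    std::string json;
    // Serialize each DensePrediction into a JSON string.
    if (!google::protobuf::util::MessageToJsonString(
             classify_out->predictions(i), &json).ok()) {
      return -1;
    }
    res->add_predictions()->set_response_json(json);
  }
  return 0;
}
```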
#### 2.2.2 Example Configuration
For details on server-side configuration, see [Serving-side configuration](../SERVING_CONFIGURE.md).
The following configuration files chain ReaderOp, ClassifyOp and WriteJsonOp into one workflow (for the OP/workflow concepts, see the [design doc](DESIGN.md)).
- Example configuration files:
**Add file serving/conf/service.prototxt**
```shell
services {
  name: "ImageClassifyService"
  workflows: "workflow1"
}
```
**Add file serving/conf/workflow.prototxt**
```shell
workflows {
  name: "workflow1"
  workflow_type: "Sequence"
  nodes {
    name: "image_reader_op"
    type: "ReaderOp"
  }
  nodes {
    name: "image_classify_op"
    type: "ClassifyOp"
    dependencies {
      name: "image_reader_op"
      mode: "RO"
    }
  }
  nodes {
    name: "write_json_op"
    type: "WriteJsonOp"
    dependencies {
      name: "image_classify_op"
      mode: "RO"
    }
  }
}
```
The following configuration files control model loading:
**Add file serving/conf/resource.prototxt**
```shell
model_manager_path: ./conf
model_manager_file: model_toolkit.prototxt
```
**Add file serving/conf/model_toolkit.prototxt**
```shell
engines {
  name: "image_classification_resnet"
  type: "FLUID_CPU_NATIVE_DIR"
  reloadable_meta: "./data/model/paddle/fluid_time_file"
  reloadable_type: "timestamp_ne"
  model_data_path: "./data/model/paddle/fluid/SE_ResNeXt50_32x4d"
  runtime_thread_num: 0
  batch_infer_size: 0
  enable_batch_align: 0
}
```
#### 2.2.3 Building the Code
The server side consists of the following parts:
- the protobuf interface file, which must be compiled into .pb.cc and .pb.h files and linked into the final executable
- the OP operator implementations, linked into the final executable
- the Paddle Serving framework code, packaged in libpdserving.a and linked into the final executable
- Paddle Serving's wrapper around the paddle-fluid inference library, in libfluid_cpu_engine.a produced in the inferencer-fluid-cpu/ directory
- other third-party dependencies: the Paddle inference library, brpc, opencv, etc.
1) Compiling the protobuf interface file: the default protoc plugin cannot be used here; the file must be compiled into Paddle Serving's customized .pb.cc and .pb.h files. The command is:
```shell
$ protoc --cpp_out=/path/to/paddle-serving/build/serving/ --pdcodegen_out=/path/to/paddle-serving/ --plugin=protoc-gen-pdcodegen=/path/to/paddle-serving/build/predictor/pdcodegen --proto_path=/path/to/paddle-serving/predictor/proto --proto_path=/path/to/paddle-serving/serving/proto /path/to/paddle-serving/serving/proto/image_class.proto
```
where
`pdcodegen` is a protobuf compiler plugin built from predictor/src/pdcodegen.cpp; --proto_path tells protoc where to look for the protobuf files required by the `import` statements, and the .proto file to compile is passed last.
The predictor/proto directory contains builtin_format.proto and pds_option.proto, which both the server and the client must include.
**NOTE**
In Paddle Serving's build system the protoc command above is wrapped into a CMake function; see PROTOBUF_GENERATE_SERVING_CPP in cmake/generic.cmake.
It is invoked from CMakeLists.txt as follows:
```shell
PROTOBUF_GENERATE_SERVING_CPP(PROTO_SRCS PROTO_HDRS xxx.proto)
```
2) OP operators:
the .cpp files for the OPs under serving/op/
3) The Paddle Serving framework code, packaged in libpdserving.a produced in the predictor/ directory
4) Paddle Serving's wrapper around the paddle-fluid inference library, in libfluid_cpu_engine.a produced in the inferencer-fluid-cpu/ directory
5) The server-side main function:
To spare users from writing initialization code, the required server-side initialization is already provided by the Paddle Serving framework; see predictor/src/pdserving.cpp. That file contains the complete initialization sequence, so users only need to supply a suitable set of configuration files (see section 2.2.2) and do not have to write a main function.
6) Third-party dependencies:
third-party libraries such as brpc, paddle-fluid and opencv.
7) Linking:
The whole link step is expressed in CMakeLists.txt as follows:
```shell
target_link_libraries(serving opencv_imgcodecs
${opencv_depend_libs} -Wl,--whole-archive fluid_cpu_engine
-Wl,--no-whole-archive pdserving paddle_fluid ${paddle_depend_libs}
${MKLML_LIB} ${MKLML_IOMP_LIB} -lpthread -lcrypto -lm -lrt -lssl -ldl -lz)
```
### 2.3 gflags Options
The gflags options supported by the server are listed below, with their default values.

| name | default | meaning |
|------|---------|---------|
|workflow_path|./conf|directory of the workflow configuration file|
|workflow_file|workflow.prototxt|workflow configuration file name|
|inferservice_path|./conf|directory of the service configuration file|
|inferservice_file|service.prototxt|service configuration file name|
|resource_path|./conf|directory of the resource manager configuration file|
|resource_file|resource.prototxt|resource manager configuration file name|
|reload_interval_s|10|reload thread interval, in seconds|
|enable_model_toolkit|true|enable model management|
|enable_protocol_list|baidu_std|list of brpc communication protocols|
|log_dir|./log|log directory|
|num_threads|number of CPU cores|number of system threads used by the brpc server|
|max_concurrency|0|upper bound on requests the brpc server processes concurrently; a value <= 0 means no limit|

Defaults can be overridden in serving/conf/gflags.conf, for example
```
--log_dir=./serving_log/
```
directs the logs into the ./serving_log/ directory.
## 3. Client Side
### 3.1 Define the Prediction Interface
**Add image_class.proto under sdk-cpp/proto**
This is essentially the same protobuf file as on the serving side; just change `generate_impl = true` to `generate_stub = true`:
```c++
import "pds_option.proto";
...
service ImageClassifyService {
rpc inference(Request) returns (Response);
rpc debug(Request) returns (Response);
// 改动:打开generate_stub开关(以支持配置驱动)
option (pds.options).generate_stub = true;
};
```
### 3.2 Client-Side Logic
The C++ SDK provided by Paddle Serving lives in the sdk-cpp/ directory; the entry point is the `PredictorApi` class in sdk-cpp/include/predictor_sdk.h.
Its main interface:
```C++
class PredictorApi {
  // Create the PredictorApi handle; takes the directory and file name of the
  // client-side configuration file predictors.prototxt
  int create(const char* path, const char* file);
  // Thread-level initialization
  int thrd_initialize();
  // Fetch a Predictor handle by name; ep_name corresponds to the name field
  // of a predictors entry in predictors.prototxt
  Predictor* fetch_predictor(std::string ep_name);
};

class Predictor {
  // Prediction
  int inference(google::protobuf::Message* req, google::protobuf::Message* res);
  // Debug mode
  int debug(google::protobuf::Message* req,
            google::protobuf::Message* res,
            butil::IOBufBuilder* debug_os);
};
```
#### 3.2.1 Request Logic
**Add sdk-cpp/demo/ximage.cpp**
```c++
// Process-level initialization
PredictorApi api;
if (api.create("./conf/", "predictors.prototxt") != 0) {
  return -1;
}
// Thread-level prediction calls:
Request req;
Response res;
api.thrd_initialize();
// Call this before every request
api.thrd_clear();
create_req(&req);
Predictor* predictor = api.fetch_predictor("ximage");
if (predictor == NULL) {
  return -1;
}
if (predictor->inference(&req, &res) != 0) {
  return -1;
}
// Parse the response
print_res(res);
// Thread-level teardown
api.thrd_finalize();
// Process-level teardown
api.destroy();
```
The full implementation is in the example shipped with Paddle Serving, sdk-cpp/demo/ximage.cpp; the helpers used above can be sketched as follows.
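A minimal sketch of the `create_req` and `print_res` helpers, assuming `XImageReqInstance` carries `image_binary` and `image_length` fields and `XImageResInstance` a `response_json` field, which builtin_format.proto is expected to define; the image path below is illustrative:
```c++
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

using baidu::paddle_serving::predictor::format::XImageReqInstance;

void create_req(Request* req) {
  // Read an image file into memory; the path is illustrative.
  std::ifstream fin("./data/images/example.jpg", std::ios::binary);
  const std::string buf((std::istreambuf_iterator<char>(fin)),
                        std::istreambuf_iterator<char>());
  XImageReqInstance* ins = req->add_instances();
  ins->set_image_binary(buf);
  ins->set_image_length(static_cast<uint32_t>(buf.size()));
}

void print_res(const Response& res) {
  // Each prediction carries a JSON string serialized by WriteJsonOp.
  for (int i = 0; i < res.predictions_size(); ++i) {
    std::cout << res.predictions(i).response_json() << std::endl;
  }
}
```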
### 3.3 Linking
The client executable contains the following code:
- the protobuf interface file, which must be compiled into .pb.cc and .pb.h files and linked into the final executable
- the main function, plus the logic that calls the SDK to access the prediction service; see section 3.2.1
- the client-side code that reads and maintains the predictor list, in libsdk-cpp.a produced in the sdk-cpp/ directory
- because the protobuf interface file uses builtin_format.proto and pds_option.proto from the predictor/proto/ directory, libpdserving.a must be linked in as well
1) The protobuf interface file. As on the serving side, it must be compiled with protoc plus the pdcodegen plugin built from predictor/src/pdcodegen.cpp. The command is:
```shell
$ protoc --cpp_out=/path/to/paddle-serving/build/sdk-cpp/ --pdcodegen_out=/path/to/paddle-serving/ --plugin=protoc-gen-pdcodegen=/path/to/paddle-serving/build/predictor/pdcodegen --proto_path=/path/to/paddle-serving/predictor/proto --proto_path=/path/to/paddle-serving/sdk-cpp/proto /path/to/paddle-serving/sdk-cpp/proto/image_class.proto
```
where
`pdcodegen` is a protobuf compiler plugin built from predictor/src/pdcodegen.cpp; --proto_path tells protoc where to look for the protobuf files required by the `import` statements, and the .proto file to compile is passed last.
**NOTE**
In Paddle Serving's build system the protoc command above is wrapped into a CMake function; see PROTOBUF_GENERATE_SERVING_CPP in cmake/generic.cmake.
It is invoked from CMakeLists.txt as follows:
```shell
PROTOBUF_GENERATE_SERVING_CPP(PROTO_SRCS PROTO_HDRS xxx.proto)
```
2) The main function, plus the logic that calls the SDK to access the prediction service
3) The client-side code that reads and maintains the predictor list, in libsdk-cpp.a produced in the sdk-cpp/ directory
4) libpdserving.a produced in the predictor/ directory
The final link step is:
```shell
add_executable(ximage ${CMAKE_CURRENT_LIST_DIR}/demo/ximage.cpp)
target_link_libraries(ximage -Wl,--whole-archive sdk-cpp
-Wl,--no-whole-archive pdserving -lpthread -lcrypto -lm -lrt -lssl -ldl
-lz)
```
### 3.4 Connection Configuration
**Add configuration file sdk-cpp/conf/predictors.prototxt**
```shell
## Shared default configuration
default_variant_conf {
  tag: "default"
  connection_conf {
    connect_timeout_ms: 2000
    rpc_timeout_ms: 20000
    connect_retry_count: 2
    max_connection_per_host: 100
    hedge_request_timeout_ms: -1
    hedge_fetch_retry_count: 2
    connection_type: "pooled"
  }
  naming_conf {
    cluster_filter_strategy: "Default"
    load_balance_strategy: "la"
  }
  rpc_parameter {
    compress_type: 0
    package_size: 20
    protocol: "baidu_std"
    max_channel_per_request: 3
  }
}

predictors {
  name: "ximage"
  service_name: "baidu.paddle_serving.predictor.image_classification.ImageClassifyService"
  endpoint_router: "WeightedRandomRender"
  weighted_random_render_conf {
    variant_weight_list: "50"
  }
  variants {
    tag: "var1"
    naming_conf {
      cluster: "list://127.0.0.1:8010"
    }
  }
}
```
For detailed client-side configuration options, see [CLIENT CONFIGURATION](../CLIENT_CONFIGURE.md).
# Paddle Serving Design
([简体中文](./DESIGN_CN.md)|English)
## 1. Background
PaddlePaddle is Baidu's open-source machine learning framework, supporting a wide range of customized deep learning model development. Paddle Serving is Paddle's online prediction framework: it connects seamlessly with Paddle model training and provides machine learning prediction as a cloud service. This article describes the Paddle Serving design from the bottom up, across the model, service and access levels.
1. The model is the core of Paddle Serving prediction, covering the management of model data and inference computation;
2. The prediction framework encapsulates the model's inference computation and exposes an RPC interface to connect to different upstreams;
3. The prediction service SDK provides an access framework.
Together these form a complete serving solution.
## 2. Terminology
- **baidu-rpc**: Baidu's officially open-sourced RPC framework. It supports several common communication protocols and provides a protobuf-based experience for defining custom interfaces.
- **Variant**: Paddle Serving's abstraction of a minimal prediction cluster. All instances (replicas) inside a Variant are fully homogeneous and logically correspond to one fixed version of one model.
- **Endpoint**: multiple Variants form an Endpoint. Logically, an Endpoint represents one model, and the Variants within it represent different versions of that model.
- **OP**: what PaddlePaddle uses to encapsulate a numerical operator; Paddle Serving uses it to represent a basic business operator, whose core interface is inference. An OP is configured with the upstream OPs it depends on, which is how multiple OPs are chained into a workflow.
- **Channel**: the abstraction of all request-level intermediate data of an OP; OPs exchange data through Channels.
- **Bus**: manages all Channels in a thread, and schedules the access relationships between the set of OPs and the set of Channels according to the DAG's dependency graph.
- **Stage**: a set of OPs in the topology described by the Workflow's DAG that belong to the same phase and can execute in parallel.
- **Node**: an OP operator instance, formed from an OP operator class combined with its parameter configuration; also the execution unit in a Workflow.
- **Workflow**: executes the inference interface of each OP in order, following the topology described by the DAG.
- **DAG/Workflow**: composed of several interdependent Nodes. Each Node can obtain the Request object through a specific interface; a Node's OP obtains the output objects of its predecessor OPs through the dependency relationship; the output of the last Node is, by default, the Response object.
- **Service**: encapsulates one request. A Service can be configured with several Workflows, which share the current request's Request object and execute in parallel or serially, the final Response being written to the designated output slot. One Paddle Serving process can be configured with multiple Service interfaces; the upstream selects which Service to access by ServiceName.
## 3. Python Interface Design
### 3.1 Core Goal
Deliver a set of Paddle Serving dynamic libraries that support remote prediction over the general models saved by Paddle, with the underlying Paddle Serving functionality driven through a Python interface.
### 3.2 General Model
Any model that can run prediction through the Paddle inference library: a model saved during training, including its feed variables and fetch variables.
### 3.3 Overall Design
- The user starts the Client and Server through the Python API, which can check that the two sides are connected and that the model to be accessed matches.
- The Python API calls the pybind bindings of the client and server functionality implemented by Paddle Serving; the information exchanged between the two sides is transmitted over RPC.
- The Client Python API currently has two simple functions, load_inference_conf and predict, which load the model to be used for prediction and run prediction, respectively.
- The Server Python API is mainly responsible for loading the inference model and generating the various configurations Paddle Serving needs, including engines, workflow, resources, etc.
### 3.4 Server Interface
![Server Interface](../server_interface.png)
### 3.5 Client Interface
<img src='../client_inferface.png' width = "600" height = "200">
### 3.6 Client IO Used During Training
Paddle Serving provides a save-model interface that can be used during training. It is essentially the same as Paddle's save-inference-model interface, taking a feed_var_dict and a fetch_var_dict.
These dictionaries let you alias the input and output variables. The configuration that the serving side reads at startup is saved into the client and server storage directories.
``` python
def save_model(server_model_folder,
               client_config_folder,
               feed_var_dict,
               fetch_var_dict,
               main_program=None)
```
## 4. Paddle Serving Underlying Framework
![Paddle Serving overall architecture](../framework.png)
**Model Management Framework**: connects to model files from multiple machine learning platforms and provides a unified inference interface upward.
**Business Scheduling Framework**: abstracts the computation logic of different inference models and provides a general DAG scheduling framework. Operators are connected through a DAG to jointly complete one prediction. This abstraction lets users implement their own computation logic conveniently and makes operators easy to share. (When users build their own prediction services, a large part of the work is building the DAG and providing the operators.)
**Predict Service**: the wrapper around the externally exposed prediction service interface. Communication fields with the client are defined through protobuf.
### 4.1 Model Management Framework
The model management framework manages the models trained by machine learning frameworks. It can be abstracted into three levels: model loading, model data and model inference.
#### Model Loading
Loads the model from disk into memory, supporting multiple versions, hot reloading, incremental updates, etc.
#### Model Data
The model's in-memory data structures, integrating the fluid inference library.
#### Inferencer
Provides a unified inference interface to the upper layers:
```C++
class FluidFamilyCore {
  virtual bool Run(const void* in_data, void* out_data);
  virtual int create(const std::string& data_path);
  virtual int clone(void* origin_core);
};
```
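To make the contract concrete, here is a hypothetical engine filling in this interface. It is a sketch only: the real engines live under inferencer-fluid-cpu/, and the class name and the opaque handle below are illustrative.
```C++
#include <string>

// Abridged base interface from the listing above (made pure virtual here).
class FluidFamilyCore {
 public:
  virtual ~FluidFamilyCore() {}
  virtual bool Run(const void* in_data, void* out_data) = 0;
  virtual int create(const std::string& data_path) = 0;
  virtual int clone(void* origin_core) = 0;
};

// Hypothetical CPU engine wrapping an opaque predictor handle.
class FluidCpuNativeCore : public FluidFamilyCore {
 public:
  // Load the model files under data_path into an in-memory predictor.
  int create(const std::string& data_path) override {
    _model_path = data_path;  // real code would build a paddle predictor here
    return 0;                 // 0 on success, non-zero on failure
  }
  // Cast in_data/out_data to the tensor containers the inference lib expects
  // and run one forward pass.
  bool Run(const void* in_data, void* out_data) override {
    return in_data != nullptr && out_data != nullptr;
  }
  // Share the weights of an already-created core, keeping per-thread clones cheap.
  int clone(void* origin_core) override {
    _origin_core = origin_core;
    return 0;
  }

 private:
  std::string _model_path;
  void* _origin_core = nullptr;
};
```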
### 4.2 Business Scheduling Framework
#### 4.2.1 Inference Service
Borrowing the abstraction the TensorFlow framework uses for model computation, the business logic is abstracted into a DAG, driven by configuration to generate the workflow, with no C++ recompilation required. Each concrete step of a service corresponds to one OP, and each OP can be configured with the upstream OPs it depends on. Message passing between OPs is handled uniformly by the thread-level Bus and Channel mechanisms. For example, a simple prediction service can be abstracted into three steps, reading the request data -> calling the prediction interface -> writing back the prediction result, implemented as three OPs: ReaderOp -> ClassifyOp -> WriteOp.
![Infer Service](../predict-service.png)
For the dependencies between OPs and how OPs are assembled into workflows, refer to [Creating a Prediction Service from Scratch](CREATING.md).
Server instance perspective:
![Server instance perspective](../server-side.png)
#### 4.2.2 Paddle Serving Multi-Service Mechanism
![Paddle Serving multi-service](../multi-service.png)
A Paddle Serving instance can load multiple models at the same time, each model served by one Service (and its configured workflow). Refer to the [service configuration file in the demo example](../../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for one serving instance.
#### 4.2.3 Hierarchical relationship of business scheduling
From the client's perspective, a Paddle Serving service can be divided into three levels: Service, Endpoint, and Variant from top to bottom.
![Call hierarchy relationship](../multi-variants.png)
One Service corresponds to one inference model, with one endpoint under the model. Different versions of the model are realized through multiple variants under the endpoint:
The same model prediction service can be configured with multiple variants, each with its own downstream IP list. The client code can assign each variant a relative weight to adjust the traffic ratio between them (see the description of variant_weight_list in [Client Configuration](../CLIENT_CONFIGURE.md) section 3.2).
![Client-side proxy function](../client-side-proxy.png)
## 5. User Interface
Provided certain interface conventions are met, the service framework places no constraints on user data fields, so the differing business interfaces of all kinds of prediction services can be accommodated. baidu-rpc inherits the protobuf service interface, and users describe their Request and Response business interfaces following the protobuf syntax specification. Paddle Serving is built on the baidu-rpc framework and supports this feature by default.
No matter how the communication protocol changes, the framework only needs to keep the client and server synchronized on the communication protocol and on the business data format to guarantee normal communication. This information breaks down as follows:
- Protocol: header information agreed in advance between server and client to ensure they recognize each other's data format. Paddle Serving uses protobuf as the basic communication format.
- Data: describes the Request and Response interfaces, such as the sample data to be predicted and the score returned by the prediction. This includes:
  - Data fields: the field definitions contained in the Request and Response data structures.
  - Description interface: similar to the protocol interface; protobuf is supported by default.
### 5.1 Data Compression Method
baidu-rpc has built-in data compression methods such as snappy, gzip and zlib, which can be selected in the configuration file (see the introduction to compress_type in [Client Configuration](../CLIENT_CONFIGURE.md) section 3.1).
### 5.2 C++ SDK API
```C++
class PredictorApi {
 public:
  int create(const char* path, const char* file);
  int thrd_initialize();
  int thrd_clear();
  int thrd_finalize();
  void destroy();

  Predictor* fetch_predictor(std::string ep_name);
  int free_predictor(Predictor* predictor);
};

class Predictor {
 public:
  // synchronous interface
  virtual int inference(google::protobuf::Message* req,
                        google::protobuf::Message* res) = 0;
  // asynchronous interface
  virtual int inference(google::protobuf::Message* req,
                        google::protobuf::Message* res,
                        DoneType done,
                        brpc::CallId* cid = NULL) = 0;
  // synchronous debug interface
  virtual int debug(google::protobuf::Message* req,
                    google::protobuf::Message* res,
                    butil::IOBufBuilder* debug_os) = 0;
};
```
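As a usage illustration, the sketch below drives the synchronous debug interface to capture the server-side per-OP trace. It assumes `api` was created and thread-initialized as shown in section 3.2.1 of CREATING.md, that Request/Response are the user-defined protobuf messages, and that an endpoint named "ximage" exists in predictors.prototxt; butil/iobuf.h comes from brpc.
```C++
#include <iostream>
#include <butil/iobuf.h>  // butil::IOBufBuilder (from brpc)

// Returns 0 on success; assumes api.thrd_initialize()/thrd_clear() were called.
int debug_once(PredictorApi& api, Request& req, Response& res) {
  Predictor* predictor = api.fetch_predictor("ximage");
  if (predictor == NULL) {
    return -1;
  }
  butil::IOBufBuilder debug_os;  // accumulates the server-side trace
  if (predictor->debug(&req, &res, &debug_os) != 0) {
    api.free_predictor(predictor);
    return -1;
  }
  std::cout << debug_os.buf() << std::endl;  // dump per-OP timing/trace info
  return api.free_predictor(predictor);
}
```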
### 5.3 Interfaces related to Op
```C++
class Op {
  // ------ Getters for Channel/Data/Message of dependent OPs ------
  // Get the Channel object of a dependent OP
  Channel* mutable_depend_channel(const std::string& op);
  // Get the Channel object of a dependent OP
  const Channel* get_depend_channel(const std::string& op) const;

  template <typename T>
  T* mutable_depend_argument(const std::string& op);
  template <typename T>
  const T* get_depend_argument(const std::string& op) const;

  // ------ Getters for Channel/Data/Message of the current OP ------
  // Get a pointer to the protobuf message of the current OP
  google::protobuf::Message* mutable_message();
  // Get a pointer to the protobuf message of the current OP
  const google::protobuf::Message* get_message() const;
  // Get the template-class data object of the current OP
  template <typename T>
  T* mutable_data();
  // Get the template-class data object of the current OP
  template <typename T>
  const T* get_data() const;

  // ---------------- Other base class members ----------------
  int init(Bus* bus,
           Dag* dag,
           uint32_t id,
           const std::string& name,
           const std::string& type,
           void* conf);
  int deinit();
  int process(bool debug);
  // Get the input object
  const google::protobuf::Message* get_request_message();
  const std::string& type() const;
  uint32_t id() const;

  // ------------------ OP Interface -------------------
  // Get the derived Channel object of the current OP
  virtual Channel* mutable_channel() = 0;
  // Get the derived Channel object of the current OP
  virtual const Channel* get_channel() const = 0;
  // Release the derived Channel object of the current OP
  virtual int release_channel() = 0;
  // Inference interface
  virtual int inference() = 0;

  // ------------------ Conf Interface -------------------
  virtual void* create_config(const configure::DAGNode& conf) { return NULL; }
  virtual void delete_config(void* conf) {}
  virtual void set_config(void* conf) { return; }

  // ------------------ Metric Interface -------------------
  virtual void regist_metric() { return; }
};
```
### 5.4 Interfaces related to framework
Service
```C++
class InferService {
 public:
  static const char* tag() { return "service"; }
  int init(const configure::InferService& conf);
  int deinit() { return 0; }
  int reload();
  const std::string& name() const;
  const std::string& full_name() const { return _infer_service_format; }
  // Execute each workflow serially
  virtual int inference(const google::protobuf::Message* request,
                        google::protobuf::Message* response,
                        butil::IOBufBuilder* debug_os = NULL);
  int debug(const google::protobuf::Message* request,
            google::protobuf::Message* response,
            butil::IOBufBuilder* debug_os);
};

class ParallelInferService : public InferService {
 public:
  // Execute workflows in parallel
  int inference(const google::protobuf::Message* request,
                google::protobuf::Message* response,
                butil::IOBufBuilder* debug_os) {
    return 0;
  }
};
```
ServerManager
```C++
class ServerManager {
 public:
  typedef google::protobuf::Service Service;
  ServerManager();
  static ServerManager& instance() {
    static ServerManager server;
    return server;
  }
  static bool reload_starting() { return _s_reload_starting; }
  static void stop_reloader() { _s_reload_starting = false; }
  int add_service_by_format(const std::string& format);
  int start_and_wait();
};
```
DAG
```C++
class Dag {
 public:
  EdgeMode parse_mode(std::string& mode);  // NOLINT
  int init(const char* path, const char* file, const std::string& name);
  int init(const configure::Workflow& conf, const std::string& name);
  int deinit();
  uint32_t nodes_size();
  const DagNode* node_by_id(uint32_t id);
  const DagNode* node_by_id(uint32_t id) const;
  const DagNode* node_by_name(std::string& name);  // NOLINT
  const DagNode* node_by_name(const std::string& name) const;
  uint32_t stage_size();
  const DagStage* stage_by_index(uint32_t index);
  const std::string& name() const { return _dag_name; }
  const std::string& full_name() const { return _dag_name; }
  void regist_metric(const std::string& service_name);
};
```
Workflow
```C++
class Workflow {
 public:
  Workflow() {}
  static const char* tag() { return "workflow"; }
  // Each workflow object corresponds to an independent
  // configure file, so you can share the object between
  // different apps.
  int init(const configure::Workflow& conf);
  DagView* fetch_dag_view(const std::string& service_name);
  int deinit() { return 0; }
  void return_dag_view(DagView* view);
  int reload();
  const std::string& name() { return _name; }
  const std::string& full_name() { return _name; }
};
```