# Paddle Serving Design
([简体中文](./DESIGN_CN.md)|English)
## 1. Background
PaddlePaddle is Baidu's open source machine learning framework, which supports a wide range of customized development of deep learning models. Paddle Serving is the online prediction framework of Paddle, which connects seamlessly with Paddle model training and provides cloud services for machine learning prediction. This article describes the design of Paddle Serving from the bottom up, covering the model, service, and access levels:
1. The model is the core of Paddle Serving prediction, covering the management of model data and inference computation;
2. The prediction framework encapsulates the model for inference computation and provides an external RPC interface to connect different upstream clients;
3. The prediction service SDK provides a set of access frameworks.
The result is a complete serving solution.
## 2. Terms explanation
- **baidu-rpc**: Baidu's official open source RPC framework. It supports multiple common communication protocols and provides customizable interfaces based on protobuf.
- **Variant**: Paddle Serving's abstraction of a minimal prediction cluster. All instances (replicas) inside a Variant are completely homogeneous and logically correspond to a fixed version of a model.
- **Endpoint**: Multiple Variants form an Endpoint. Logically, an Endpoint represents one model, and the Variants within it represent different versions.
- **OP**: In PaddlePaddle, an OP encapsulates a numerical computation operator; in Paddle Serving, it represents a basic business operator whose core interface is inference. An OP configures the upstream OPs it depends on, so that multiple OPs can be connected into a workflow.
- **Channel**: An abstraction of all request-level intermediate data of an OP; OPs exchange data with each other through Channels.
- **Bus**: Manages all Channels in a thread and schedules access between OPs and Channels according to the DAG dependency graph between OPs.
- **Stage**: A collection of OPs in a Workflow that sit at the same level of the DAG topology and can be executed in parallel.
- **Node**: An OP instance, composed of an OP class together with its parameter configuration; also the execution unit in a Workflow.
- **Workflow**: Executes the inference interface of each OP in order, following the topology described by the DAG.
- **DAG/Workflow**: Consists of several interdependent Nodes. Each Node can obtain the Request object through a specific interface, and a Node's OP obtains the output of its predecessor OPs through the dependency relationship. The output of the last Node is, by default, the Response object.
- **Service**: Encapsulates one prediction (PV) request. A Service can configure several Workflows, which share the current request's Request object and are executed in parallel or serially; the final Response is written to the corresponding output slot. One Paddle Serving process can configure multiple Services, and the upstream selects the Service to access by its ServiceName.
## 3. Python Interface Design
### 3.1 Core Targets:
A set of Paddle Serving dynamic libraries that supports remote prediction services for general models saved by Paddle, with the various underlying functions of Paddle Serving callable through the Python interface.
### 3.2 General Model:
Models that can be used for prediction by the Paddle Inference Library and that are saved during training, including their Feed Variables and Fetch Variables.
### 3.3 Overall design:

- The user starts the Client and Server through the Python API, which includes a function to check whether the client and server are interconnected and whether the models to be accessed match.
- The Python API calls the pybind bindings of the client and server functionality implemented by Paddle Serving; the information exchanged between client and server is transmitted through RPC.
- The Client Python API currently has two simple functions, load_inference_conf and predict, which load the configuration of the model to be served and perform prediction, respectively (a usage sketch follows this list).
- The Server Python API is mainly responsible for loading the inference model and generating the various configurations required by Paddle Serving, including engines, workflow, resources, etc.
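
A minimal sketch of the client-side flow follows. Only `load_inference_conf` and `predict` are named above; the `Client` class, the `connect` call, the endpoint address, and the feed/fetch aliases are illustrative assumptions:

```python
# Hypothetical minimal client usage; module and class names are assumptions.
from paddle_serving_client import Client

client = Client()
# Load the client-side configuration generated when the model was saved.
client.load_inference_conf("serving_client_conf/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])  # server endpoint list, illustrative

# Feed input variables by alias; fetch the prediction output by alias.
fetch_map = client.predict(feed={"words": [1, 2, 3]}, fetch=["prediction"])
print(fetch_map["prediction"])
```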

### 3.4 Server Interface

![Server Interface](../server_interface.png)
### 3.5 Client Interface
<img src='../client_inferface.png' width = "600" height = "200">
### 3.6 Client IO used during Training
Paddle Serving provides an interface for saving the model during the training process. It is basically the same as the Paddle `save_inference_model` interface, except that feed_var_dict and fetch_var_dict
let you alias the input and output variables. The configuration that needs to be read when serving starts is saved in the client and server storage directories.

``` python
def save_model(server_model_folder,
               client_config_folder,
               feed_var_dict,
               fetch_var_dict,
               main_program=None)
```
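
For example, a training script could call the interface above as follows (a sketch assuming a Paddle 1.x fluid program; the toy network, the aliases, and the folder names are illustrative):

```python
import paddle.fluid as fluid

# Toy network standing in for a trained model.
x = fluid.data(name="x", shape=[None, 13], dtype="float32")
prediction = fluid.layers.fc(input=x, size=1)

save_model("serving_server_model",       # server-side model and config directory
           "serving_client_conf",        # client-side config directory
           {"input": x},                 # feed alias -> input variable
           {"prediction": prediction},   # fetch alias -> output variable
           fluid.default_main_program())
```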
## 4. Paddle Serving Underlying Framework
![Paddle Serving Overall Architecture](../framework.png)
**Model Management Framework**: Connects model files from multiple machine learning platforms and provides a unified inference interface.
**Business Scheduling Framework**: Abstracts the computation logic of various inference models and provides a general DAG scheduling framework; different operators are connected through a DAG diagram to jointly complete a prediction service. This abstraction allows users to conveniently implement their own computation logic while also facilitating operator sharing. (When users build their own prediction services, a large part of the work is building DAGs and providing operators.)
**Predict Service**: Encapsulation of the externally provided prediction service interface, with the communication fields between client and server defined through protobuf.
### 4.1 Model Management Framework
The model management framework is responsible for managing the models trained by machine learning frameworks. It can be abstracted into three levels: model loading, model data, and model inference.
#### Model Loading
Loads models from disk into memory, with support for multiple versions, hot loading, incremental updates, etc.
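
A conceptual sketch of the hot-load idea (illustrative Python only, not the framework's C++ implementation; a reload thread could poll the model file like this):

```python
import os

class ModelHolder:
    """Reload a model file only when the on-disk version changes."""

    def __init__(self, path):
        self.path = path
        self.mtime = None     # timestamp of the version currently in memory
        self.model = None

    def reload_if_changed(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:  # a newer version landed on disk
            self.mtime = mtime
            with open(self.path, "rb") as f:
                self.model = f.read()  # stand-in for real model deserialization
```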
#### Model Data
The in-memory data structure of the model, integrated with the Fluid inference library.

#### Inferencer

Provides a unified inference interface for upper layers:

```C++
class FluidFamilyCore {
  // Run inference: read input from in_data and write results to out_data
  virtual bool Run(const void* in_data, void* out_data);
  // Load the model from the given path
  virtual int create(const std::string& data_path);
  // Clone from an existing core, sharing its model data
  virtual int clone(void* origin_core);
};
```

### 4.2 Business Scheduling Framework
#### 4.2.1 Inference Service
Borrowing the model-computation abstraction of the TensorFlow framework, the business logic is abstracted into a DAG that is driven by configuration to generate a workflow, so no C++ code has to be recompiled. Each specific step of the service corresponds to a specific OP, and an OP can configure the upstream OPs it depends on. Unified message passing between OPs is achieved through the thread-level Bus and Channel mechanisms. For example, the flow of a simple prediction service can be abstracted into three steps, reading request data -> calling the prediction interface -> writing back the prediction result, implemented as three OPs: ReaderOp -> ClassifyOp -> WriteOp (an illustrative sketch follows the figure below).
![Infer Service](../predict-service.png)
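
As an illustrative sketch of this scheduling idea, the following plain-Python pseudocode mimics OPs exchanging request-level data through channels and executing in DAG order (the class and function names here are assumptions for illustration, not the actual C++ framework):

```python
# Conceptual sketch: OPs connected by channels, executed in DAG order.
class Op:
    def __init__(self, name, deps):
        self.name, self.deps = name, deps  # deps: names of upstream channels

    def inference(self, inputs):
        # A real OP implements business logic here; this stub just tags the data.
        return {"produced_by": self.name, "inputs": inputs}

def run_workflow(ops, request):
    channels = {"request": request}        # request-level intermediate data
    for op in ops:                         # ops are listed in topological order
        upstream = {dep: channels[dep] for dep in op.deps}
        channels[op.name] = op.inference(upstream)
    return channels[ops[-1].name]          # the last OP's output is the Response

workflow = [Op("reader_op", ["request"]),
            Op("classify_op", ["reader_op"]),
            Op("write_op", ["classify_op"])]
response = run_workflow(workflow, {"data": "raw bytes"})
```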
For the dependencies between OPs and how to build a workflow from OPs, refer to [从零开始写一个预测服务](CREATING.md) (Writing a Prediction Service from Scratch, Simplified Chinese).
Server instance perspective
![Server instance perspective](../server-side.png)
#### 4.2.2 Paddle Serving Multi-Service Mechanism
![Paddle Serving multi-service](../multi-service.png)
A Paddle Serving instance can load multiple models at the same time, with each model served by a Service (and its configured workflows). Refer to the [service configuration file in the demo example](../../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple Services for a serving instance.
#### 4.2.3 Hierarchical relationship of business scheduling
From the client's perspective, a Paddle Serving service can be divided into three levels: Service, Endpoint, and Variant from top to bottom.
![Call hierarchy relationship](../multi-variants.png)
One Service corresponds to one inference model, with one Endpoint under the model. Different versions of the model are implemented as multiple Variants under the Endpoint:
The same model prediction service can configure multiple Variants, each with its own downstream IP list. The client can assign relative weights to the Variants to adjust the traffic ratio among them (refer to the description of variant_weight_list in [Client Configuration](../CLIENT_CONFIGURE.md) section 3.2; a sketch of this weighting follows the figure below).
![Client-side proxy function](../client-side-proxy.png)
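
A minimal sketch of the weighted Variant selection described above (illustrative Python only; the actual selection happens inside the client SDK, and all names and numbers here are assumptions):

```python
import random

# Variant name -> (relative weight, downstream IP list); values are illustrative.
variants = {
    "default": (70, ["10.0.0.1:8010", "10.0.0.2:8010"]),
    "var2":    (30, ["10.0.1.1:8010"]),
}

def pick_instance(variants):
    names = list(variants)
    weights = [variants[name][0] for name in names]
    chosen = random.choices(names, weights=weights, k=1)[0]  # weight-proportional pick
    return random.choice(variants[chosen][1])                # then pick a replica

print(pick_instance(variants))
```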
## 5. User Interface
Provided certain interface specifications are met, the service framework places no restrictions on user data fields, so the different business interfaces of various prediction services can all be served. Baidu-rpc inherits the Protobuf service interface, and users describe their Request and Response business interfaces following the Protobuf syntax specification. Paddle Serving is built on the Baidu-rpc framework and supports this feature by default.
No matter how the communication protocol changes, the framework only needs to ensure that the client and server agree on the communication protocol and on the format of the business data in order to communicate properly. This information can be broken down as follows:
- Protocol: Header information agreed in advance between Server and Client to ensure mutual recognition of the data format. Paddle Serving uses Protobuf as the basic communication format.
- Data: Describes the interface of Request and Response, such as the sample data to be predicted and the score returned by the prediction, including:
  - Data fields: field definitions contained in the Request and Response data structures.
  - Description interface: similar to the protocol interface; it supports Protobuf by default.
### 5.1 Data Compression Method
Baidu-rpc has built-in data compression methods such as snappy, gzip, and zlib, which can be configured in the configuration file (refer to [Client Configuration](../CLIENT_CONFIGURE.md) section 3.1 for an introduction to compress_type).
### 5.2 C++ SDK API Interface

```C++
class PredictorApi {
 public:
  int create(const char* path, const char* file);
  int thrd_initialize();
  int thrd_clear();
  int thrd_finalize();
  void destroy();

  Predictor* fetch_predictor(std::string ep_name);
  int free_predictor(Predictor* predictor);
};

class Predictor {
 public:
  // Synchronous interface
  virtual int inference(google::protobuf::Message* req,
                        google::protobuf::Message* res) = 0;

  // Asynchronous interface
  virtual int inference(google::protobuf::Message* req,
                        google::protobuf::Message* res,
                        DoneType done,
                        brpc::CallId* cid = NULL) = 0;

  // Synchronous interface
  virtual int debug(google::protobuf::Message* req,
                    google::protobuf::Message* res,
                    butil::IOBufBuilder* debug_os) = 0;
};

```

### 5.3 Interfaces related to Op

```C++
class Op {
  // ------Getters for Channel/Data/Message of dependent OP-----

  // Get the Channel object of dependent OP
  Channel* mutable_depend_channel(const std::string& op);

  // Get the Channel object of dependent OP
  const Channel* get_depend_channel(const std::string& op) const;

  template <typename T>
  T* mutable_depend_argument(const std::string& op);

  template <typename T>
  const T* get_depend_argument(const std::string& op) const;

  // -----Getters for Channel/Data/Message of current OP----

  // Get pointer to the protobuf message of current OP
  google::protobuf::Message* mutable_message();

  // Get pointer to the protobuf message of current OP
  const google::protobuf::Message* get_message() const;

  // Get the template class data object of current OP
  template <typename T>
  T* mutable_data();

  // Get the template class data object of current OP
  template <typename T>
  const T* get_data() const;

  // ---------------- Other base class members ----------------

  int init(Bus* bus,
           Dag* dag,
           uint32_t id,
           const std::string& name,
           const std::string& type,
           void* conf);

  int deinit();


  int process(bool debug);

  // Get the input object
  const google::protobuf::Message* get_request_message();

  const std::string& type() const;

  uint32_t id() const;

  // ------------------ OP Interface -------------------

  // Get the derived Channel object of current OP
  virtual Channel* mutable_channel() = 0;

  // Get the derived Channel object of current OP
  virtual const Channel* get_channel() const = 0;

  // Release the derived Channel object of current OP
  virtual int release_channel() = 0;

  // Inference interface
  virtual int inference() = 0;

  // ------------------ Conf Interface -------------------
  virtual void* create_config(const configure::DAGNode& conf) { return NULL; }

  virtual void delete_config(void* conf) {}

  virtual void set_config(void* conf) { return; }

  // ------------------ Metric Interface -------------------
  virtual void regist_metric() { return; }
};

```

### 5.4 Interfaces related to the framework

Service

```C++
class InferService {
 public:
  static const char* tag() { return "service"; }
  int init(const configure::InferService& conf);
  int deinit() { return 0; }
  int reload();
  const std::string& name() const;
  const std::string& full_name() const { return _infer_service_format; }

  // Execute each workflow serially
  virtual int inference(const google::protobuf::Message* request,
                        google::protobuf::Message* response,
                        butil::IOBufBuilder* debug_os = NULL);

  int debug(const google::protobuf::Message* request,
            google::protobuf::Message* response,
            butil::IOBufBuilder* debug_os);

};

class ParallelInferService : public InferService {
 public:
  // Execute workflows in parallel
  int inference(const google::protobuf::Message* request,
                google::protobuf::Message* response,
                butil::IOBufBuilder* debug_os) {
    return 0;
  }
};
```
ServerManager

```C++
class ServerManager {
 public:
  typedef google::protobuf::Service Service;
  ServerManager();

  static ServerManager& instance() {
    static ServerManager server;
    return server;
  }
  static bool reload_starting() { return _s_reload_starting; }
  static void stop_reloader() { _s_reload_starting = false; }
  int add_service_by_format(const std::string& format);
  int start_and_wait();
};
```

DAG

```C++
class Dag {
 public:
  EdgeMode parse_mode(std::string& mode);  // NOLINT

  int init(const char* path, const char* file, const std::string& name);

  int init(const configure::Workflow& conf, const std::string& name);

  int deinit();

  uint32_t nodes_size();

  const DagNode* node_by_id(uint32_t id);

  const DagNode* node_by_id(uint32_t id) const;

  const DagNode* node_by_name(std::string& name);  // NOLINT

  const DagNode* node_by_name(const std::string& name) const;

  uint32_t stage_size();

  const DagStage* stage_by_index(uint32_t index);

  const std::string& name() const { return _dag_name; }

  const std::string& full_name() const { return _dag_name; }

  void regist_metric(const std::string& service_name);
};
```

Workflow

```C++
class Workflow {
 public:
  Workflow() {}
  static const char* tag() { return "workflow"; }

  // Each workflow object corresponds to an independent
  // configure file, so you can share the object between
  // different apps.
  int init(const configure::Workflow& conf);

  DagView* fetch_dag_view(const std::string& service_name);

  int deinit() { return 0; }

  void return_dag_view(DagView* view);

  int reload();

  const std::string& name() { return _name; }

  const std::string& full_name() { return _name; }
};
```