Commit bec4aeda authored by helinwang, committed by GitHub

Merge pull request #391 from helinwang/fix

Add serve API notes
@@ -5,6 +5,7 @@ import paddle.v2 as paddle
+with_gpu = os.getenv('WITH_GPU', '0') != '0'
 def softmax_regression(img):
     predict = paddle.layer.fc(
         input=img, size=10, act=paddle.activation.Softmax())
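The `with_gpu` line added here (and repeated in each training script this commit touches) reads the `WITH_GPU` environment variable so the same script can run on CPU or GPU. A minimal sketch of how such a flag is typically consumed in the v2 API; the `trainer_count` value is an illustrative assumption, not something this commit specifies:

```python
import os
import paddle.v2 as paddle

# True when the WITH_GPU environment variable is set to anything other than '0'.
with_gpu = os.getenv('WITH_GPU', '0') != '0'

# A typical way to consume the flag; trainer_count=1 is an illustrative
# default, not part of this commit.
paddle.init(use_gpu=with_gpu, trainer_count=1)
```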
@@ -21,6 +21,7 @@ from resnet import resnet_cifar10
+with_gpu = os.getenv('WITH_GPU', '0') != '0'
 def main():
     datadim = 3 * 32 * 32
     classdim = 10
@@ -5,6 +5,7 @@ import os
+with_gpu = os.getenv('WITH_GPU', '0') != '0'
 def get_usr_combined_features():
     uid = paddle.layer.data(
         name='user_id',
@@ -17,6 +17,7 @@ import paddle.v2 as paddle
+with_gpu = os.getenv('WITH_GPU', '0') != '0'
 def convolution_net(input_dim, class_dim=2, emb_dim=128, hid_dim=128):
     data = paddle.layer.data("word",
                              paddle.data_type.integer_value_sequence(input_dim))
@@ -4,6 +4,7 @@ import paddle.v2 as paddle
+with_gpu = os.getenv('WITH_GPU', '0') != '0'
 def save_model(parameters, save_path):
     with open(save_path, 'w') as f:
         parameters.to_tar(f)
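Parameters written with `to_tar` above can later be restored with the matching `from_tar` class method of `paddle.parameters.Parameters`. A minimal sketch, assuming the same `save_path` convention as the diff; the `load_model` helper name is hypothetical:

```python
import paddle.v2 as paddle

def load_model(save_path):
    # Restore parameters previously written by save_model()'s to_tar() call.
    with open(save_path, 'r') as f:
        return paddle.parameters.Parameters.from_tar(f)
```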
@@ -6,7 +6,14 @@ PaddlePaddle. It provides an HTTP endpoint.
 
 ## Run
 
 The inference server reads a trained model (a topology file and a
-parameter file) and serves HTTP requests at port `8000`.
+parameter file) and serves HTTP requests at port `8000`. Because models
+differ in the numbers and types of inputs, **the HTTP API will differ
+slightly for each model.** Please see [HTTP API](#http-api) for the
+API spec, and
+[here](https://github.com/PaddlePaddle/book/wiki/PaddlePaddle-Book-pretrained-model)
+for request examples of different models that illustrate the
+difference.
 
 We will first show how to obtain the PaddlePaddle model, and then how
 to start the server.
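Since the exact request body varies per model, the sketch below only illustrates the general shape of a call to the server: a JSON POST to port `8000`. The input name `img` and the 784-float payload are hypothetical placeholders; consult the per-model examples linked above for the real field names:

```python
import json
import urllib2  # Python 2, matching the book's v2-era examples

# Hypothetical single-input model: the field name 'img' and the payload
# shape are placeholders; real names depend on the model being served.
payload = json.dumps({'img': [0.0] * 784})
request = urllib2.Request('http://localhost:8000/', payload,
                          {'Content-Type': 'application/json'})
print(urllib2.urlopen(request).read())
```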