
    Paddle Serving

    An easy-to-use Machine Learning Model Inference Service Deployment Tool


    Motivation

    Paddle Serving helps deep learning developers deploy an online inference service without much effort. Currently, Paddle Serving supports deep learning models trained with Paddle, although it is easy to integrate the model inference engines of other deep learning frameworks.

    Key Features

    • Integrates seamlessly with the Paddle training pipeline; most Paddle models can be deployed with a single command.
    • Industrial serving features such as model management, online model loading, and online A/B testing.
    • Distributed key-value indexing, which is especially useful for large-scale sparse features used as model inputs.
    • Highly concurrent and efficient communication between clients and servers.
    • Client-side support for multiple programming languages, such as Golang, C++, and Python.

    Installation

    pip install paddle-serving-client
    pip install paddle-serving-server
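
    If you want to serve models on GPU, a GPU build of the server package is published separately. A minimal sketch, assuming a CUDA-enabled environment (the exact package name may carry version- or CUDA-specific suffixes in some releases):

    pip install paddle-serving-server-gpu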

    Quick Start Example

    wget --no-check-certificate https://paddle-serving.bj.bcebos.com/uci_housing.tar.gz
    tar -xzf uci_housing.tar.gz
    python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
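
    The command above starts an RPC server that the Python client in the next section talks to. As a hedged sketch of the HTTP route (the --name flag, the request payload layout, and the /uci/prediction endpoint are assumptions based on common Paddle Serving releases and may differ in yours), the same model can also be exposed over HTTP and queried with curl:

    python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci
    curl -H "Content-Type:application/json" -X POST -d '{"feed": [{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}], "fetch": ["price"]}' http://127.0.0.1:9292/uci/prediction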

    Python Client Request

    from paddle_serving_client import Client
    
    client = Client()
    client.load_client_config("uci_housing_client")
    client.connect(["127.0.0.1:9292"])
    data = [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727,
            -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]
    fetch_map = client.predict(feed={"x": data}, fetch=["price"])
    print(fetch_map)
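
    The returned fetch_map is a Python dict keyed by the requested fetch variables, so the prediction here can be read from fetch_map["price"].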
    

    Documentation

    Design Doc (Chinese)

    How to configure Serving native operators on the server side?

    How to develop a new Serving operator

    Client API for other programming languages

    FAQ (Chinese)

    Advanced features and development

    Compile from source code (Chinese)

    Join Community

    To connect with other users and contributors, you are welcome to join our Slack channel.

    Contribution

    If you want to contribute code to Paddle Serving, please refer to the Contribution Guidelines.
