@@ -37,8 +37,9 @@ We consider deploying deep learning inference service online to be a user-facing
We highly recommend running Paddle Serving in Docker; please see [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md).
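As a minimal sketch of the workflow that page describes, assuming an image name of `hub.baidubce.com/paddlepaddle/serving:latest` and the pip package `paddle-serving-server` (both are assumptions here; check RUN_IN_DOCKER.md for the current image tag and package names), the steps look roughly like:

```bash
# Pull a Paddle Serving image (image name/tag are assumptions; see RUN_IN_DOCKER.md)
docker pull hub.baidubce.com/paddlepaddle/serving:latest

# Start a container and expose an illustrative serving port (9292)
docker run -dit --name paddle_serving -p 9292:9292 hub.baidubce.com/paddlepaddle/serving:latest

# Enter the container and install the serving package inside it
docker exec -it paddle_serving bash
pip install paddle-serving-server
```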