# Landing OpenVINO™ Model Server on Bare Metal Hosts and Virtual Machines

## Introduction
OpenVINO™ Model Server includes a C++ implementation of the gRPC and RESTful API interfaces defined by TensorFlow Serving.
In the backend it uses the Inference Engine libraries from the OpenVINO™ toolkit, which speed up execution on CPU and enable inference on iGPU and Movidius devices.

OpenVINO™ Model Server can be hosted on a bare metal server, a virtual machine, or inside a docker container. It is also suitable for deployment in a Kubernetes environment.
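A running instance can be queried over the TensorFlow Serving compatible REST API. Below is a minimal sketch, assuming a model named `resnet` is already being served with its REST endpoint on port 8001 (both hypothetical):
```Bash
# Query the model status endpoint of the TensorFlow Serving style REST API.
curl http://localhost:8001/v1/models/resnet
```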

## System Requirements

#### Operating Systems 

We test OpenVINO Model Server execution on bare metal on Ubuntu 20.04.x.

For other operating systems we recommend using [OVMS docker containers](./docker_container.md).
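For example, a prebuilt image could be pulled from Docker Hub (assuming the public `openvino/model_server` image described in the linked page):
```Bash
docker pull openvino/model_server:latest
```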

#### Hardware 

Check out [supported configurations](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html).

Check the VPU plugins to see if your model is supported, and use the [OpenVINO Model Optimizer](https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer) to convert your model to the OpenVINO format.
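For illustration, here is a hypothetical conversion of a TensorFlow frozen graph, assuming the Model Optimizer is available as the `mo` command (e.g. from the `openvino-dev` pip package); the `1` subfolder matches the numbered version layout the model server expects:
```Bash
# Convert a hypothetical frozen graph to OpenVINO IR format, writing it as model version 1.
mo --input_model model.pb --output_dir /models/resnet/1
```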

## Model Server Installation<a name="model-server-installation"></a>
1. Clone the model server git repository:
   ```Bash
   git clone https://github.com/openvinotoolkit/model_server
   ```

2. Navigate to the model server directory:
   ```Bash
   cd model_server
   ```
3. To install the Model Server, you can use a precompiled version or build it on your own inside a docker container. Build a docker container with automated steps using the command:
   ```Bash
   make docker_build
   ```
4. The `make docker_build` target also places a copy of the binary package in the `dist` subfolder of the model server root directory (a quick verification sketch follows this list).

5. Navigate to the folder containing the binary package and unpack the included tar.gz file:
   ```Bash
   cd dist/ubuntu && tar -xzvf ovms.tar.gz
   ```
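
To verify the results of steps 4 and 5, the binary package and the unpacked server binary can be listed (the exact archive layout may differ between releases):
```Bash
# The binary package copied by `make docker_build` (step 4).
ls dist/ubuntu
# The unpacked server binary used in the next section (step 5).
ls ovms/bin
```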

## Running the Server
1. The server can be started in the folder where OVMS was installed using the command:
   ```Bash
   ./ovms/bin/ovms --help
   ```
2. The server can be started in interactive mode, as a background process, or as a daemon initiated by `systemctl`/`initd`, depending on the Linux distribution and specific hosting requirements.
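
For example, the server could be launched in the foreground with a single model. This is a minimal sketch; the model name and path are hypothetical, and the model directory must contain numbered version subfolders:
```Bash
# Serve one model over gRPC (port 9001) and REST (port 8001).
./ovms/bin/ovms --model_name resnet --model_path /models/resnet --port 9001 --rest_port 8001
```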
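To run it as a background process instead, the same command can be detached with standard shell tools, for example:
```Bash
# Start the server in the background, redirecting logs to a file.
nohup ./ovms/bin/ovms --model_name resnet --model_path /models/resnet --port 9001 > ovms.log 2>&1 &
```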

Refer to [Running Model Server using Docker Container](./docker_container.md) for more details about ovms parameters and configuration.
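
For instance, multiple models can be served from a single configuration file. A minimal sketch, assuming models stored under `/models` (the model names and paths are hypothetical):
```Bash
# Write a minimal multi-model configuration; names and base paths below are hypothetical.
cat > config.json <<'EOF'
{
  "model_config_list": [
    {"config": {"name": "resnet", "base_path": "/models/resnet"}},
    {"config": {"name": "face-detection", "base_path": "/models/face-detection"}}
  ]
}
EOF
# Start the server with the configuration file instead of a single model.
./ovms/bin/ovms --config_path config.json --port 9001
```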

**Note:** When AI accelerators are used for inference execution, additional steps might be needed to install their drivers and dependencies.
Learn more in the [OpenVINO installation guide](https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html).
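
Once the drivers are in place, the inference device can be selected at startup. A sketch assuming an iGPU and the hypothetical model from the earlier examples:
```Bash
# Run inference on the integrated GPU instead of the CPU.
./ovms/bin/ovms --model_name resnet --model_path /models/resnet --port 9001 --target_device GPU
```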