MegEngine is a fast, scalable and easy-to-use deep learning framework, with auto-differentiation.
## Installation
**NOTE:** MegEngine now supports Linux-64bit/Windows-64bit/MacOS-10.14+ (CPU-only) platforms with Python from 3.5 to 3.8. On Windows 10 you can either install the Linux distribution through [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl) or install the Windows distribution directly.
### Binaries
Commands to install from binaries via pip wheels are as follows:
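The typical pip invocation looks like the following; the extra wheel index URL is an assumption based on MegEngine's official wheel hosting, so verify it against megengine.org.cn before use:

```shell
# Install MegEngine from the official wheel index (URL assumed; check megengine.org.cn).
python3 -m pip install megengine -f https://megengine.org.cn/whl/mge.html
```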
Most of the dependencies of MegEngine are located in the `third_party` directory, which can be prepared by executing:
```bash
./third_party/prepare.sh
./third_party/install-mkl.sh
```
Some dependencies need to be installed manually:
* [CUDA](https://developer.nvidia.com/cuda-toolkit-archive) (>=10.1) and [cuDNN](https://developer.nvidia.com/cudnn) (>=7.6) are required when building MegEngine with CUDA support.
* [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html) (>=5.1.5) is required when building with TensorRT support.
* LLVM/Clang (>=6.0) is required when building with Halide JIT support.
* Python (>=3.5) and NumPy are required to build Python modules.
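A quick sanity check of the Python-side requirements can save a failed configure run later. This is a minimal sketch covering only the Python and NumPy items above; CUDA, TensorRT, and LLVM/Clang versions still need to be checked against their own installers:

```shell
# Verify the Python interpreter meets the minimum version requirement.
python3 -c "import sys; assert sys.version_info >= (3, 5), sys.version"
# Verify NumPy is importable and report its version.
python3 -c "import numpy; print('NumPy', numpy.__version__)"
```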
### Build
MegEngine uses CMake as the build tool. We provide the following scripts to facilitate building:

* [host_build.sh](scripts/cmake-build/host_build.sh) builds MegEngine targeted to run on the same host machine. Run the following command to get help information:

  ```bash
  scripts/cmake-build/host_build.sh -h
  ```

* [cross_build_android_arm_inference.sh](scripts/cmake-build/cross_build_android_arm_inference.sh) builds MegEngine targeted to run on Android-ARM platforms. Run the following command to get help information:

  ```bash
  scripts/cmake-build/cross_build_android_arm_inference.sh -h
  ```

* [cross_build_linux_arm_inference.sh](scripts/cmake-build/cross_build_linux_arm_inference.sh) builds MegEngine targeted to run on Linux-ARM platforms. Run the following command to get help information:

  ```bash
  scripts/cmake-build/cross_build_linux_arm_inference.sh -h
  ```
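As a concrete example, an Android cross build typically requires pointing the script at an Android NDK installation. The `NDK_ROOT` variable name and the `/path/to/android-ndk` placeholder are assumptions based on common NDK setups; consult the script's `-h` output for the authoritative options:

```shell
# Point the build script at a local Android NDK (path is a placeholder).
export NDK_ROOT=/path/to/android-ndk
# Run the Android-ARM cross build with default options.
scripts/cmake-build/cross_build_android_arm_inference.sh
```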