MegEngine is a fast, scalable, and easy-to-use deep learning framework with auto-differentiation. It offers three key features:
* **Unified core for both training and inference**
  * Quantization, dynamic shapes, image pre-processing, and even differentiation can all be represented in one model.
  * After training, simply put everything into your model and run inference on any platform with ease. Because training and inference share the same core, speed and precision stay consistent. Check the usage [here](https://www.megengine.org.cn/doc/stable/zh/user-guide/model-development/traced_module/index.html).
* **Low hardware requirements, thanks to algorithmic support**
  * During training, enabling the [DTR algorithm](https://www.megengine.org.cn/doc/stable/zh/user-guide/model-development/dtr/index.html) with only one additional line can cut GPU memory usage to as little as one-third.
  * At inference time, achieve minimal memory usage by leveraging our unique pushdown memory planner.
* **Efficient inference on all platforms**
  * Fast, high-precision inference on x86/Arm/CUDA/ROCm.
  * Supports Linux/Windows/iOS/Android/TEE...
  * Save memory and improve speed by leveraging [advanced usage](https://www.megengine.org.cn/doc/stable/zh/user-guide/deployment/lite/advance/index.html).
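As a concrete illustration of the "one additional line" for DTR, here is a minimal configuration sketch, assuming a recent MegEngine release with the `megengine.dtr` module; the `"5GB"` eviction threshold is an illustrative value, not a recommendation:

```python
import megengine as mge

# Optional tuning knob: tensors are considered for eviction once GPU memory
# use passes this threshold. "5GB" is an illustrative value -- tune it to
# your own GPU.
mge.dtr.eviction_threshold = "5GB"

# The one additional line: turn on Dynamic Tensor Rematerialization before
# building the training loop. Training code itself is unchanged.
mge.dtr.enable()
```

After this call, the rest of the training script runs as usual; DTR transparently evicts and recomputes intermediate tensors to trade compute for memory.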
------
...
...