### Work Goals
MindSpore Lite is an ultra-fast, intelligent, and simplified AI engine that enables intelligent applications in all scenarios, provides E2E solutions for users, and helps users enable AI capabilities. For more information, see the [MindSpore Lite official website](https://www.mindspore.cn/lite). The MindSpore SIG not only provides users with basic training and inference services; more importantly, to expand the ecosystem, it cooperates with developers and assists them in contributing their code.
The AI subsystem is a key subsystem of OpenHarmony. It provides an on-device inference framework and AI capability/service interfaces. The inference framework efficiently integrates hardware computing resources in the southbound direction, shields underlying differences from AI application developers in the northbound direction, and unifies inference interfaces. The AI capability/service interfaces have built-in general AI capabilities that give AI application developers out-of-the-box AI capabilities. The AI subsystem integrates the AI technology stack, simplifying the development and maintenance of AI applications.
### Work Scope
- Model converter
The MindSpore Lite model converter tool converts TensorFlow, TensorFlow Lite, Caffe, and ONNX models into the MindSpore Lite model format; operator fusion and quantization can be applied during the conversion procedure.
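The quantization step mentioned above can be illustrated with a minimal sketch. This is a conceptual, stdlib-only example of symmetric per-tensor int8 post-training quantization, not the MindSpore Lite converter's actual implementation:

```python
# Toy sketch of post-training quantization as applied by a model converter:
# map float32 weights to int8 with a single per-tensor scale (illustrative
# only; the real converter's algorithm and APIs are not shown here).

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= q * scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    quantized = [max(-128, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

The converter trades a small, bounded precision loss (at most half a quantization step per weight) for a 4x smaller weight footprint and faster integer arithmetic on device.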
- AI capability/service interface
AI capability/service interfaces are classified into AI capability interfaces and AI service interfaces. AI capability interfaces encapsulate AI models and provide out-of-the-box AI capabilities for AI application developers, simplifying the AI application development process. AI service interfaces allow users or third-party capability providers to serve customized AI capabilities, enabling AI application developers.
- Training
Support small-sample, transfer, and incremental training on the device to achieve a personalized AI experience.
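On-device incremental training can be pictured as updating a model one sample at a time as user data arrives, rather than retraining from scratch. The following is a toy sketch using a one-feature linear model and SGD; it is illustrative only and does not use MindSpore Lite's training API:

```python
# Toy sketch of incremental (online) training for personalization: each new
# on-device sample nudges the model weights with one SGD step.
# Illustrative only; not MindSpore Lite's actual training interface.

class OnlineLinearModel:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        """One SGD step on squared error for a single (x, y) sample."""
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlineLinearModel(n_features=1)
# Stream of user samples drawn from y = 2x; the fit improves as updates
# accumulate, without ever storing the full dataset.
for _ in range(200):
    for x, y in [([0.0], 0.0), ([1.0], 2.0), ([2.0], 4.0)]:
        model.update(x, y)
assert abs(model.predict([1.5]) - 3.0) < 0.1
```

The same shape of update loop is what makes small-sample and incremental learning feasible on resource-constrained devices: only the current sample and the model parameters need to be held in memory.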
- MindSpore
MindSpore is an ultra-fast, intelligent, and simplified AI engine that enables intelligent applications in all scenarios and provides E2E solutions for users. For more information, see the [MindSpore official website](https://www.mindspore.cn/lite).
- Inference
Load the model and perform inference. Inference is the process of running input data through the model to obtain output.
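"Running input data through the model" reduces, at the lowest level, to executing the model's layers in order. A minimal stdlib-only sketch of one dense layer with ReLU activation (the weights here are hand-written; in practice they would come from a converted model file):

```python
# Toy sketch of inference: a single dense layer followed by ReLU, the kind of
# primitive an inference engine executes repeatedly. Illustrative only.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, weights, bias):
    """y = W @ x + b for a row-major weight matrix."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# Hand-written 2-in / 2-out layer standing in for loaded model parameters.
W = [[1.0, -1.0],
     [0.5, 0.5]]
b = [0.0, 1.0]

x = [2.0, 1.0]          # input data
y = relu(dense(x, W, b))  # inference: input -> model -> output
assert y == [1.0, 2.5]
```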
- Neural Network Runtime
Neural Network Runtime is an important bridge between the on-device inference framework and AI chips. It unifies the northbound and southbound APIs of inference. The northbound native API provides unified online IR building, model compilation, and inference functions for AI inference frameworks. The southbound HDI interface is open to hardware vendors, who can connect AI chips to OpenHarmony through it, building a rich OpenHarmony AI southbound ecosystem.
- Special AI chip support
Support connecting special AI chips to MindSpore Lite.
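The three northbound stages that Neural Network Runtime unifies (IR building, model compilation, inference) can be sketched as a generic pipeline. All class and function names below are hypothetical, chosen only to illustrate the pattern; they are not the actual OpenHarmony native API:

```python
# Toy sketch of the build -> compile -> execute pipeline that a neural
# network runtime exposes northbound. Names are illustrative, not the real
# OpenHarmony Neural Network Runtime API.

class IRGraph:
    """Stage 1: build an intermediate representation of the model online."""
    def __init__(self):
        self.ops = []

    def add_op(self, name, fn):
        self.ops.append((name, fn))
        return self

class CompiledModel:
    """Stage 3: execute the compiled graph on the selected backend."""
    def __init__(self, ops, backend):
        self.ops = ops
        self.backend = backend  # e.g. an AI chip chosen at compile time

    def run(self, x):
        for _name, fn in self.ops:
            x = fn(x)
        return x

def compile_graph(graph, backend="cpu"):
    """Stage 2: a real runtime would lower ops to backend instructions
    here; this sketch just binds the graph to a backend label."""
    return CompiledModel(graph.ops, backend)

graph = (IRGraph()
         .add_op("scale", lambda v: [2.0 * e for e in v])
         .add_op("relu", lambda v: [max(0.0, e) for e in v]))
executor = compile_graph(graph, backend="npu")
assert executor.run([-1.0, 3.0]) == [0.0, 6.0]
```

Because frameworks only ever see the build/compile/run stages, a vendor who implements the southbound HDI interface for a new AI chip makes that chip available to every northbound framework without any framework-side changes.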