Commit b82ed066 authored by yangyongjie, committed by jinguang

!1242 Create the neural_network_runtime repository

* Add neural_network_runtime repo
Parent 8bc40346
......@@ -8,27 +8,26 @@ Note: The content of this SIG follows the convention described in OpenHarmony's
### Work Goals
MindSpore Lite is an ultra-fast, intelligent, and simplified AI engine that enables intelligent applications in all scenarios, provides E2E solutions for users, and helps users enable AI capabilities. For more information, please see the [MindSpore Lite official website](https://www.mindspore.cn/lite). The MindSpore SIG not only provides users with basic training and inference services; more importantly, to expand the ecosystem, we need to cooperate with developers and assist them in contributing their code.
The AI subsystem is a key subsystem of OpenHarmony. It provides an on-device inference framework and AI capability/service interfaces. The inference framework efficiently integrates hardware computing resources in the southbound direction, shields underlying differences from AI application developers in the northbound direction, and unifies inference interfaces. The AI capability/service interfaces have built-in general AI capabilities that provide AI application developers with out-of-the-box AI capabilities. The AI subsystem integrates the AI technology stack, which simplifies the development and maintenance of AI applications.
### Work Scope
- Model converter
The MindSpore Lite model converter tool converts TensorFlow, TensorFlow Lite, Caffe, and ONNX models to the MindSpore Lite model format; operator fusion and quantization can be applied during conversion.
- AI capability/service interface
AI capability/service interfaces are classified into AI capability interfaces and AI service interfaces. AI capability interfaces encapsulate AI models and provide out-of-the-box AI capabilities for AI application developers, simplifying the AI application development process. AI service interfaces allow users or third-party capability providers to expose customized AI capabilities as services, enabling AI application developers.
- Training
Supports small-sample, transfer, and incremental training on the device to achieve a personalized AI experience.
- MindSpore
MindSpore is an ultra-fast, intelligent, and simplified AI engine that enables intelligent applications in all scenarios, provides E2E solutions for users, and helps users enable AI capabilities. For more information, please see the [MindSpore official website](https://www.mindspore.cn/lite).
- Inference
Loads the model and performs inference. Inference is the process of running input data through the model to obtain output.
- Neural Network Runtime
Neural Network Runtime is an important bridge between the on-device inference framework and AI chips. It unifies the northbound and southbound inference APIs: the northbound native API provides unified online IR building, model compilation, and inference functions for AI inference frameworks, while the southbound HDI interface is open to hardware vendors, who can connect AI chips to OpenHarmony through it to build a rich OpenHarmony AI southbound ecosystem.
- Special AI chip support
Supports connecting special AI chips to MindSpore Lite.
- AI Subsystem Architecture
![figures/ai-framework-overview_en.png](figures/ai-framework-overview_en.png)
![figures/ai-framework-arch-en.png](figures/ai-framework-arch-en.png)
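The build, compile, and execute stages that Neural Network Runtime exposes to inference frameworks can be illustrated with a small toy model. Everything below is a hypothetical sketch: the class names, op table, and method signatures are illustrative stand-ins, not the actual NNRt native API.

```python
# Toy sketch of the three-stage flow described above: build an IR graph
# online, compile it for a backend, then run inference on input data.
# All names are illustrative, NOT the real Neural Network Runtime API.
from dataclasses import dataclass, field


@dataclass
class Model:
    """Northbound IR: a graph built online as a list of (op, constant) pairs."""
    ops: list = field(default_factory=list)

    def add_op(self, name, constant):
        self.ops.append((name, constant))


@dataclass
class Compilation:
    """Backend-specific plan produced from the IR (here: Python callables)."""
    kernels: list

    @classmethod
    def from_model(cls, model):
        table = {"add": lambda x, c: x + c, "mul": lambda x, c: x * c}
        return cls([(table[op], c) for op, c in model.ops])


class Executor:
    """Runs input data through the compiled plan: the inference step."""
    def __init__(self, compilation):
        self.kernels = compilation.kernels

    def run(self, x):
        for kernel, c in self.kernels:
            x = kernel(x, c)
        return x


model = Model()
model.add_op("mul", 3)   # y = x * 3
model.add_op("add", 1)   # y = y + 1
executor = Executor(Compilation.from_model(model))
print(executor.run(2))   # (2 * 3) + 1 = 7
```

The separation mirrors the division of labor in the text: the inference framework owns the `Model` stage, while `Compilation` and `Executor` are where a real runtime would hand off to vendor drivers behind the southbound HDI interface.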
### Repositories
| Component Name | Component Functionality Description | Component repository name |
......@@ -40,6 +39,7 @@ Support Special AI chip to connect to MindSpore Lite.
- FlatBuffers: https://gitee.com/openharmony/third_party_flatbuffers
- OpenCL-Headers: https://gitee.com/openharmony/third_party_opencl-headers
- OpenCL-CLHPP: https://gitee.com/openharmony-sig/third_party_opencl-clhpp
- Neural Network Runtime: https://gitee.com/openharmony-sig/neural_network_runtime
## SIG Members
......@@ -47,14 +47,16 @@ Support Special AI chip to connect to MindSpore Lite.
- @ivss(https://gitee.com/ivss)
- @zhanghaibo5(https://gitee.com/zhanghaibo5)
- @silenchen(https://gitee.com/silenchen)
### Committers
- @zhaizhiqiang(https://gitee.com/zhaizhiqiang)
- @sunsuodong(https://gitee.com/sunsuodong)
- @zhang_xue_tong(https://gitee.com/zhang_xue_tong)
- @HilbertDavid(https://gitee.com/HilbertDavid)
- @jpc_chenjianping(https://gitee.com/jpc_chenjianping)
- @yangyongjie-boom(https://gitee.com/yangyongjie-boom)
- @jianghui58(https://gitee.com/jianghui58)
### Meetings
- Meeting time: biweekly, Monday 19:00, UTC+8
......
......@@ -8,38 +8,39 @@
### Work Goals
MindSpore Lite is an ultra-fast, intelligent, and simplified AI engine that enables intelligent applications in all scenarios, provides end-to-end solutions for users, and helps users enable AI capabilities. For more information, see the [MindSpore Lite official website](https://www.mindspore.cn/lite). The MindSpore SIG not only provides users with basic training and inference services; more importantly, to expand the ecosystem, we cooperate with developers and assist them in contributing their code.
The AI subsystem is a key subsystem of OpenHarmony, providing an on-device inference framework and AI capability/service interfaces. The inference framework efficiently integrates hardware computing resources in the southbound direction, shields underlying differences from AI application developers in the northbound direction, and unifies inference interfaces. The AI capability/service interfaces have built-in general AI capabilities that provide out-of-the-box AI capabilities to AI application developers. The AI subsystem integrates the AI technology stack and effectively simplifies the development and maintenance of AI applications.
### Work Scope
- Model converter
The MindSpore Lite model converter tool not only converts TensorFlow, TensorFlow Lite, Caffe, and ONNX model formats to the MindSpore Lite model format, but also provides operator fusion, quantization, and other functions.
- AI capability/service interface
AI capability/service interfaces are divided into capability interfaces and service interfaces. Capability interfaces encapsulate AI models and provide out-of-the-box AI capabilities for AI application developers, lowering the barrier to AI application development. Service interfaces allow users or third-party capability providers to expose customized AI capabilities as services that capability interfaces can call, enabling AI application developers.
- Training
Supports small-sample, transfer, and incremental training on the device to achieve a personalized AI experience.
- MindSpore inference framework
The MindSpore inference framework is an ultra-fast, intelligent, and simplified AI engine that enables intelligent applications in all scenarios, provides end-to-end solutions for users, and helps users enable AI capabilities. For more information, see the [MindSpore official website](https://www.mindspore.cn/lite).
- Inference
Loads the model and performs all model-related computation. Inference is the process of running input data through the model to obtain predictions.
- Neural Network Runtime
Neural Network Runtime is an important bridge between the on-device inference framework and AI chips. It unifies the northbound and southbound AI inference interfaces: the northbound native API provides unified graph-building, compilation, and inference interfaces for on-device inference frameworks, while the southbound HDI interface is open to hardware vendors, who can connect AI chips to OpenHarmony through it to jointly build a rich OpenHarmony AI southbound ecosystem.
- Special AI chip support
Supports connecting special AI chips to MindSpore Lite.
- AI Subsystem Architecture
![figures/ai-framework-overview.png](figures/ai-framework-overview.png)
![figures/ai-framework-arch.png](figures/ai-framework-arch.png)
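The converter's "format conversion plus operator fusion" idea from the work scope above can be sketched as a toy pass. The format mapping and op names here are made up for illustration; this is not the real MindSpore Lite converter.

```python
# Toy sketch of model conversion with an operator-fusion pass.
# Formats and op names are illustrative, not real MindSpore Lite internals.

def convert(src_ops):
    """Map source-format op names to the (hypothetical) target format."""
    mapping = {"Conv2D": "conv2d", "Relu": "relu", "MatMul": "matmul"}
    return [mapping[op] for op in src_ops]

def fuse(ops):
    """Fuse each adjacent conv2d + relu pair into a single fused op."""
    fused, i = [], 0
    while i < len(ops):
        if i + 1 < len(ops) and ops[i] == "conv2d" and ops[i + 1] == "relu":
            fused.append("conv2d_relu")   # one kernel launch instead of two
            i += 2
        else:
            fused.append(ops[i])
            i += 1
    return fused

graph = ["Conv2D", "Relu", "MatMul"]
print(fuse(convert(graph)))   # ['conv2d_relu', 'matmul']
```

Running fusion during conversion, as described above, means the on-device runtime never sees the unfused graph and pays no inference-time cost for the optimization.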
## Repositories
| Component Name | Component Functionality | Component Repository |
| :---: | :---: | :---: |
| MindSpore inference framework<br>(MindSpore) | Provides model conversion and inference | third_party_mindspore,<br>third_party_flatbuffers |
| Neural Network Runtime | Provides inference on special AI chips | neural_network_runtime |
- Repository addresses:
- MindSpore: https://gitee.com/openharmony/third_party_mindspore
- DLLite-micro: https://gitee.com/openharmony-sig/dllite_micro
- FlatBuffers: https://gitee.com/openharmony/third_party_flatbuffers
- OpenCL-Headers: https://gitee.com/openharmony/third_party_opencl-headers
- OpenCL-CLHPP: https://gitee.com/openharmony-sig/third_party_opencl-clhpp
- Neural Network Runtime: https://gitee.com/openharmony-sig/neural_network_runtime
## SIG Members
......@@ -47,14 +48,16 @@ The MindSpore Lite model converter tool not only converts TensorFlow, TensorFlow Lite
- @ivss(https://gitee.com/ivss)
- @zhanghaibo5(https://gitee.com/zhanghaibo5)
- @silenchen(https://gitee.com/silenchen)
### Committers
- @zhaizhiqiang(https://gitee.com/zhaizhiqiang)
- @sunsuodong(https://gitee.com/sunsuodong)
- @zhang_xue_tong(https://gitee.com/zhang_xue_tong)
- @HilbertDavid(https://gitee.com/HilbertDavid)
- @jpc_chenjianping(https://gitee.com/jpc_chenjianping)
- @yangyongjie-boom(https://gitee.com/yangyongjie-boom)
- @jianghui58(https://gitee.com/jianghui58)
### Meetings
- Meeting time: biweekly, Monday 19:00, UTC+8
......
......@@ -356,14 +356,16 @@
"https://gitee.com/openharmony-sig/dllite_micro",
"https://gitee.com/openharmony/third_party_flatbuffers",
"https://gitee.com/openharmony/third_party_opencl-headers",
"https://gitee.com/openharmony-sig/third_party_opencl-clhpp",
"https://gitee.com/openharmony-sig/neural_network_runtime"
],
"project-path": [
"third_party/mindspore",
"foundation/ai/dllite-micro",
"third_party/flatbuffers",
"third_party/opencl-headers",
"third_party/opencl-clhpp",
"foundation/ai/neural_network_runtime"
]
},
{
......