Commit 02d86fc4 authored by lz, committed by Gitee

Merge branch 'master' of gitee.com:mindspore/community into master

| title | authors | owning-sig | participating-sigs | status | creation-date | reviewers | approvers | stage | milestone |
| ------- | -------------------------------- | ---------- | ------------------ | ----------- | ------------- | --------- | --------- | ----- | ------------- |
| MEP-AKG | @anyrenwei  @ckey_dou @dylangeng | akg | | provisional | 2020-06-16 | | TBD | beta | beta : "v0.5" |
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [User Stories](#user-stories)
- [Deep Graph Optimization](#deep-graph-optimization)
- [Optimize Dynamic Neural Network](#optimize-dynamic-neural-network)
- [Design Details](#design-details)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [References](#references-optional)
<!-- /toc -->
## Summary
AKG aims to generate high-performance target code for fused operators with specific patterns on different hardware backends. To do so, AKG contains the following three basic processes.
- **Operator Expression.**
AKG defines several basic operators that can be used to compose a complicated fused operator. These basic operators have the same granularity as MindSpore's IR. We use JSON to express the relations among the basic operators within a fused operator, which keeps the dependency between MindSpore and AKG weak (a hypothetical sketch of such a description follows this list).
- **Schedule initialization based on the polyhedral model.**
Once AKG obtains the DSL of the operators to be fused, it transforms the operator DSL into a formal IR (currently HalideIR, as in TVM) and then into an isl schedule tree. The polyhedral scheduling process then begins. With the help of the Pluto algorithm and other optimizations, the schedule tree undergoes transformations including vectorization, loop tiling, memory promotion and loop distribution, which improve parallelism and data locality.
- **Emit instructions on different hardware from IR.**
In order to generate correct and high-performance code for different hardware, the IR is optimized for each target, including double-buffer optimization, storage-rewrite optimization and sync-injection optimization.
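As a concrete illustration of the operator-expression step, here is a minimal sketch of how a composite operator could be described in Python and serialized to JSON before being handed to a code generator. The field names (`op`, `inputs`, `compute`, `outputs`) are hypothetical and chosen only for illustration; they do not reflect AKG's actual JSON schema.

```python
import json

# Hypothetical description of a fused multiply-add operator built from two
# basic operators with MindSpore-IR granularity. The schema below is
# illustrative only; AKG's real format may differ.
fused_op = {
    "op": "Fused_Mul_Add",
    "inputs": [
        {"name": "x", "shape": [16, 128], "dtype": "float16"},
        {"name": "y", "shape": [16, 128], "dtype": "float16"},
        {"name": "z", "shape": [16, 128], "dtype": "float16"},
    ],
    "compute": [
        {"name": "t0", "op": "Mul", "args": ["x", "y"]},
        {"name": "out", "op": "Add", "args": ["t0", "z"]},
    ],
    "outputs": ["out"],
}

# The serialized string is the only thing the frontend has to hand over,
# which is what keeps the coupling between MindSpore and AKG weak.
print(json.dumps(fused_op, indent=2))
```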
#### Deep Graph Optimization
As networks become deeper and larger, there are more opportunities to fuse different operations into one to optimize network performance.
AKG can auto-generate target code from a composite DSL without a hand-written scheduling procedure.
After automatic operator fusion and operator re-composition at the graph level, AKG can generate high-performance target code for these composite patterns.
#### Optimize Dynamic Neural Network
Networks are exhibiting more and more dynamism, especially in the fields of deep graph analysis and NLP.
Tensors in a model may have dynamic shapes such as batch size, image size, sequence length, etc.
Models are expressed with control-flow, such as recursion, conditionals and loops.
Given these dynamic requirements, AKG can generate one general kernel on Davinci hardware (and other backends) that serves different shapes of one common operator.
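To make "one general kernel for different shapes" more concrete, the sketch below declares an element-wise operator with a symbolic batch dimension using the TVM tensor-expression API (module paths vary across TVM versions; `tvm.te` is assumed here). This only illustrates the idea of a shape-parameterized kernel; AKG replaces the hand-written schedule below with a polyhedral one and targets Davinci code generation.

```python
import tvm
from tvm import te

# Symbolic first dimension: the lowered kernel is parameterized by n,
# so a single compiled function can serve different batch sizes.
n = te.var("n")
A = te.placeholder((n, 128), name="A", dtype="float16")
B = te.placeholder((n, 128), name="B", dtype="float16")
C = te.compute((n, 128), lambda i, j: A[i, j] + B[i, j], name="C")

# A default schedule, only for illustration; AKG would initialize the
# schedule automatically via its polyhedral pipeline instead.
s = te.create_schedule(C.op)
print(tvm.lower(s, [A, B, C], simple_mode=True))
```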
## Design Details
<!--![Image text](akg-design.png) {:height="75%" width="75%"} -->
AKG is composed of four basic optimization modules: normalization, auto schedule, instruction emit and backend optimization.
- **normalization.** The main optimizations in normalization include three-address transformation, common-subexpression elimination, copy propagation and so on.
- **auto schedule.** The auto schedule module mainly performs vectorization, loop tiling, memory promotion and loop distribution.
- **instruction emit.** The instruction emitting module covers loop normalization, auto pragma insertion and instruction emission.
- **backend optimization.** The backend optimization module consists of double-buffer optimization, storage-rewrite optimization and sync-injection optimization.
<img src="akg-design.png" style="zoom:80%" div align=center/>
When GraphKernel is enabled, ops are reconstructed at the graph level. The new ops, described in JSON format, are translated into DSL in AKG and then compiled to the target binary.
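From the user's perspective, this path is switched on through the MindSpore context. The sketch below assumes the `enable_graph_kernel` flag of `mindspore.context.set_context`; the flag name and supported targets depend on the MindSpore release.

```python
# Minimal sketch: turn on graph-kernel reconstruction so that fused ops are
# handed to AKG as JSON, translated into DSL and compiled to target binaries.
import mindspore.context as context

context.set_context(mode=context.GRAPH_MODE,
                    device_target="Ascend",
                    enable_graph_kernel=True)
```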
AKG uses pytest and nosetests to launch the testing process, and there are three types of testing strategies in AKG:
- **Unit Test.** Every optimization or pass in AKG has its own unit test.
- **System Test.** The AKG module has its own component testing. Basically we classify the testing into compilation verification, function verification and performance testing.
<!--
Why should this MEP _not_ be implemented?
-->
- The schedule generated directly by the Pluto algorithm during the polyhedral process may have correctness and performance issues in some scenarios, so some extra passes have to be added before emitting instructions.
## Alternatives
- Both TVM [1] and TC [2] are outstanding tools that can automatically synthesize high-performance machine learning kernels. However, neither of them can generate code for Davinci cores (CCE code), since Davinci cores have a more complicated multi-level memory design (L0-A/B/C, L1 and UB) as well as specific dataflow constraints. Besides, TVM adopts a schedule-space model in which the schedule has to be written by hand, whereas AKG uses polyhedral techniques to initialize the schedule automatically, a design that references TC.
## References
- [1] https://github.com/apache/incubator-tvm
- [2] https://github.com/facebookresearch/TensorComprehensions
| title | authors | owning-sig | participating-sigs | status | creation-date | reviewers | approvers | stage | milestone |
| ------- | -------------------------------- | ---------- | ------------------ | ----------- | ------------- | --------- | --------- | ----- | ------------- |
| MEP-mslite | @zhengli  @zhiqiangzhai @chaijun | mslite | | provisional | 2020-08-18 | | TBD | beta | beta : "v0.7" |
# MEP-MSLITE: MindSpore Lite
## Table of Contents
<!-- toc -->
- [MEP-MSLITE: MindSpore Lite](#mep-mslite-mindspore-lite)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [User Stories](#user-stories)
- [Generate a compact target model and low latency and low consumption runtime](#generate-a-compact-target-model-and-low-latency-and-low-consumption-runtime)
- [Design Details](#design-details)
- [Test Plan](#test-plan)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [References](#references-optional)
<!-- /toc -->
## Summary
MindSpore (MS) Lite is an extremely light-weight deep learning inference framework designed for smart-phones and embedded devices, such as watches and headsets.
It supports Android and iOS, as well as HarmonyOS, and has industry-leading performance.
## Motivation
With increased computing power and sensor data, intelligence is moving towards edge devices. Improved AI algorithms are driving the trend of running machine learning on end devices, such as smart-phones or automobiles, rather than in the cloud.
On-device AI can dramatically reduce latency, conserve bandwidth, improve privacy and enable smarter applications.
### Goals
- Compatibility: supports MindSpore models, as well as mainstream third-party models, such as TensorFlow Lite, Caffe 1.0 and ONNX.
- Compatibility
provides abundant operator parsers for MindSpore, TensorFlow Lite, Caffe, ONNX,
and supports common neural networks in CV and NLP, 208+ CPU operators, and 60+ GPU operators.
- High performance
Many optimization methods are applied, including graph optimizations, post-training quantization, specialized algorithms in convolution and deconvolution, and the Strassen algorithm in matrix multiplication.
Operations support fp64, fp32, fp16 and int8, and are highly optimized with acceleration by
NEON instructions, hand-written assembly, multi-threading, memory reuse, heterogeneous computing, etc.
- Versatility
Supports HarmonyOS, iOS and Android, and supports smart-phones, watches, headsets, and various IoT devices.
- Light weight
MS Lite is highly optimized with GHLO and GLLO. It has a small footprint.
MS Lite consists of a converter and a runtime.
The converter is an offline tool with three parts: frontend, IR, and backend.
The runtime is deployed to the device and executes online.
- **Frontend.** The frontend parses models from MindSpore, TensorFlow Lite, Caffe and ONNX in protobuf format.
- **IR.** The IR defines the ANF, including tensors, operations, and graphs.
- **Backend.** The backend is an optimizer based on the ANF graph, including GHLO, GLLO, and quantization. `GHLO` is short for "graph high level optimization"; it includes common optimization methods such as operator fusion, operator substitution, and constant folding. `GLLO` is short for "graph low level optimization"; its optimization methods are related to hardware, such as layout adjustment, mixed precision, etc. A generic quantization sketch follows the architecture diagram below.
- **Runtime.** The runtime has two modes: Lite RT and Lite Micro.
<img src="./ms-lite-arch.jpg" style="zoom:80%" div align=center/>
<img src="./ms-lite-arch.jpg" style="zoom:80%" div align=center/>
### Test Plan
## Implementation History
- Support high and low level graph optimization.
- Support post training quantization.
- Support Arm CPU and Mali GPU.
- Support fp64, fp32, fp16, int8 operations.
## Drawbacks
Both of them are in the scope of Huawei's MindSpore AI framework.
They share the same IR and optimization passes. MS Lite is more flexible.
## References
- [1] https://github.com/alibaba/MNN
- [2] https://www.tensorflow.org/lite
- [3] https://github.com/Tencent/TNN
## MindSpore community meeting
Coming soon...
......@@ -14,27 +14,9 @@ To build a more secure AI framework, we sincerely invite you to join us.
If you find a suspected security issue, use [Suspected Security Issue Reporting Template](https://gitee.com/mindspore/community/blob/master/security/template/report-template_en.md) to report it so that the community vulnerability management team (VMT) is able to confirm and fix the issue as soon as possible with sufficient details. Your email will be confirmed within one working day. Within seven days, we will provide more detailed replies to your suspected security issues and provide the next-step handling policy.
To ensure security, please use the [PGP public key](https://gitee.com/mindspore/community/blob/master/security/public_key_securities.asc) to encrypt your email before sending it.
+ Security email address: <mindspore-security@mindspore.cn>
+ PGP public key:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
iQG2BCABCgAgFiEEwUbNw8zaTIe27U8lt42TVbzPfREFAl58v18CHQAACgkQt42T
VbzPfRGVswwAnSIi1fE0CzIkxPrhfcnfF+vx5y+qpk6ssFr5iFuepBSbA+ZGhaDn
ULYOkBMnGfrgzjw8OzMK7vKIgR2ymmuTJt9qpFH4OIXRX1OXoMYnkPxrQJFpNZpP
BvnxmEey0VOvz9Y3Fa4mHMjvA3I2pbSlH+T2wkGQRO5zhKN7NhQfRFgyFNQT2l5m
pPBdm+sAs5ty6eQuSZF1wECIW17WB53o171DTNbAPySEfOLvq0orNAJWjT4sR1jn
9M20t3DpjC5dZuMCUuZTbCgHkaLOo0ZkwMXV+dPkm/4hMWLVPxRvlkH02PI++KBl
N8cW+TZb1YN/va9Nrjh+Ah50Px2nmQ/fk60VHKj5hTb8U+PSPGlvWUALwb6ckm55
nUcBvFiDpe7uAtX88sv2kBR6gIbr0pW9JwOnBLjxGoM3lgfrIot1qFWdBGJrRnIo
bgMtm0PEcwRfHefJY//4BiDgg2ef9DIX7VSSb6rV0HJpNz0IAxyzG41BdSG+3dSb
ns0y2L0F2M+N
=HPa4
-----END PGP PUBLIC KEY BLOCK-----
```
## MindSpore Community Security Issue Disclosure Process
If you find a suspected security issue, please report it using the [Suspected Security Issue Reporting Template](https://gitee.com/mindspore/community/blob/master/security/template/report-template_zh_cn.md) so that the community vulnerability management team (VMT), given sufficiently detailed information, can confirm and fix the issue as soon as possible. Your email will be acknowledged within one working day; within seven days we will provide a more detailed reply to the suspected security issue you reported and give the next-step handling policy.
Given the sensitivity of security issues, please encrypt your email with the [PGP public key](https://gitee.com/mindspore/community/blob/master/security/public_key_securities.asc) before sending it.
+ Security email address: <mindspore-security@mindspore.cn>
+ PGP public key:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
iQG2BCABCgAgFiEEwUbNw8zaTIe27U8lt42TVbzPfREFAl58v18CHQAACgkQt42T
VbzPfRGVswwAnSIi1fE0CzIkxPrhfcnfF+vx5y+qpk6ssFr5iFuepBSbA+ZGhaDn
ULYOkBMnGfrgzjw8OzMK7vKIgR2ymmuTJt9qpFH4OIXRX1OXoMYnkPxrQJFpNZpP
BvnxmEey0VOvz9Y3Fa4mHMjvA3I2pbSlH+T2wkGQRO5zhKN7NhQfRFgyFNQT2l5m
pPBdm+sAs5ty6eQuSZF1wECIW17WB53o171DTNbAPySEfOLvq0orNAJWjT4sR1jn
9M20t3DpjC5dZuMCUuZTbCgHkaLOo0ZkwMXV+dPkm/4hMWLVPxRvlkH02PI++KBl
N8cW+TZb1YN/va9Nrjh+Ah50Px2nmQ/fk60VHKj5hTb8U+PSPGlvWUALwb6ckm55
nUcBvFiDpe7uAtX88sv2kBR6gIbr0pW9JwOnBLjxGoM3lgfrIot1qFWdBGJrRnIo
bgMtm0PEcwRfHefJY//4BiDgg2ef9DIX7VSSb6rV0HJpNz0IAxyzG41BdSG+3dSb
ns0y2L0F2M+N
=HPa4
-----END PGP PUBLIC KEY BLOCK-----
```
## MindSpore Community Security Issue Disclosure Process
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: Keybase OpenPGP v1.0.0
Comment: https://keybase.io/crypto
xsBNBF8wxmcBCADQgjuS6JSvD5rbOstF/pkPahLpVubGYFJbrPLnCJywmZ0fq8fv
UUcOJSM5wdPEq7LHwAq1pU0Khf3w0We+ld0zwCs7RtQOY9TddaovQxhrxQG+SJXM
+S9HE4YX1ktXPuRk/AGByk3jwYa7W63IvAoGRqPoUGwO0YkQjNU3S5RUmVB2exk6
P3qY7hbjc68TOiX+J0vCy4f0uWKIzWjTIt9JkODtJv2ssaWwu192ZDJLb0JkWinU
LaKKRNorPS5Dy1jIsbVj7y9Qf4TKGMj7WI+9di3w0D/Aiij6Voh766l9E/PLUZ3W
/xcfRTvDnohoIYqcYnIjDcl97NRX3YOM1iA3ABEBAAHNNG1pbmRzcG9yZS1zZWN1
cml0eSA8bWluZHNwb3JlLXNlY3VyaXR5QG1pbmRzcG9yZS5jbj7CwG0EEwEKABcF
Al8wxmcCGy8DCwkHAxUKCAIeAQIXgAAKCRDonEDWTnblPk4sCACRdC01UAUAstRi
egZbaSYGhTBfJGvh23kjTGJnoxnK+7TxGp7Cm1w9rn3Y0gLK9mCyksYSSkd+FK6Y
r7A3x3JEmL54D/BTJj6comTYLZP8u0C2S/ifivILSxwZ0xNmf9HMyTWqvXaD2wTT
pPCuKBQHgKU4twI6/tsdGwqZRn0E3vddz5SwZ8enXS2QbijUDRqKaljQkj6ZWrOL
YFFff7J5BusEfPIX54imGiV2EFIhvm53mYK2zl7L60QcW20HauGaY0IzQUxVGl1E
9CRc7/duyVEOJWWwp0IMXDHbOBCr+ViUqubIY0SBSvXqpy2Z0dUQkYK3Z54J0oHA
sQ0e8e4LzsBNBF8wxmcBCADJ8gP8cUxMPGIZwbPmsyZHcba2C99tfT1qwCfuMIZS
KuOzUQfH6nilXhi5WlCpGGVypdCQLSl8wU24OQPmKv1D5y0r2h1DI2Ipya3THn7r
CTP07MgqOzbdHPnqWgYOVH376FpgXqcxG1/GlicKnInTFuWt7iSlFDD9eX3JLHRa
CDFm4YEwpO3HAsXzuP9wxRpEccO7Q68x2dzflfbV0TDl3f8GU6fdNHK8xSOixRyY
4oq3z7hP3bnFC1yBs2UV6Px/BUseLtmvWGl+3xL5zvLyzWyWcPdqN4CcuI/7LXk+
mAHeLBSNiV59Tjyv7KMqKppBu5vEWpeeavSNORvmwJOBABEBAAHCwYQEGAEKAA8F
Al8wxmcFCQ8JnAACGy4BKQkQ6JxA1k525T7AXSAEGQEKAAYFAl8wxmcACgkQMBVh
JvIIuQuOrQf/bO5H88GnJ+mZz/R07S9IANcS+UvnDYkEVhbIMfdJN0uUOF7PwL2K
MjsnCPc9WgKc3Vf12x28+tpqSJtM1Zk9EDvhaqiu9vOpAHpzSVAsJpjd2M8InZwc
1XXqXC44AvEYj49QW63Wh8pu8RFAK6DY2FTOF4qTXQkV2lu0ocE2KCJkcC4KfwLf
27pyBHpb5yeP6bUrYYdduhzAZQxD313rd+YfZqycMlZafjqSQifTGpgpjh7fQ3jS
TtvDwLTmXdzzW72IaYgrOir6jFeuBB2gpNSV71uYReLLxiJ+1ngNhAbGuDp3k3Ix
inCF1dzkkGEa8Uk7MAiP9L80k2gaSzPRryhWB/0ZGLi3/KegKGlIGdlP1UwAsxgS
pkbgYnb+q25jDeoKWMRgTFB+ZurqxXqPtQp9cznQ5fXNldnE/EIC363jr4rgUlfR
V+ouAr3/yKK/2loLIUvmIdnBEIYJl+gRQrM94mAKpJTr8mEZlzoO7ChY5s99XJEL
UgAs/Q+k2ISp080qzLTCYmfmXaAdvOKdaphLhHJPmf0bAS+IX2TI6PjQetfkumae
PWjahmA6cAqQDy4/fFWMTFIvzdQvPICPdHEklKvmLmIuSN8ciYY9GJTygnW7HJcA
laH6RG45EyWrTQRAuIgrVl8PdILuaAjdmEWRdOITnxj6IrB5Ggr3RQnuLuUqzsBN
BF8wxmcBCADZD+WhuNgEd7CIuNXO6dd3TJuBEMBdpmrxUCuh/KEz3BiiE4wMcv3q
wwpd0EpUDuORq/wTyrJnBOq82mdQMbDSPoP4WBmGGVUvf84IiDU5m2ZgD6kq1Aur
dCZsuBWAWSLyPIY1Kqk5VNId46sZwDhc5ueXobe6V0pr1IlRgYdPYo52OCXLVSWy
NCt1NPD00ln74m2JcodnCax3IpFjbaBQylxkFuzTNUxxyL2N2ZQHQjnuOakyG/zm
MT/otKHLytPNvfsAvSpNZ+RQZMAYUDR7YoXC5qjY2zNIGw6nO2zOwmQ6q/QJdgeb
Q5Iy4bWclAeYvId+RjVKU6ZP79hw+et7ABEBAAHCwYQEGAEKAA8FAl8wxmcFCQ8J
nAACGy4BKQkQ6JxA1k525T7AXSAEGQEKAAYFAl8wxmcACgkQnk09g1xm9S7DDQgA
orf4j7UcZRbhaRSeqY/u9ExN8a6DTf2GOru5ru0xvnfBOLmfqnfGz0oN7lun6hMg
sRKKQHFJc2950w40ewsPTKOqRFdwT2nzZ/RM5hQWGOgE2MOo4rmlq0caZ4nwPeva
+JgUW9LhGAd0h8iRWUr03Cjzy7WLrIl3W6yDeQ516HnAeEShz5tw1hse257EW5tE
suYPIU4L1b/W6VxI9jB5wpnMP0IKzu4+TL4eCXTCNS6PTLjmdex/1P3pWymLTAw2
ZkR4U0yCjckYhbXhRDcwdD1glkRpE/oUjC5SuqV9WqUs8py94JCpPSNsGb2kplKk
IM5oNvOUlkOHEojcgAMedh9HB/sHqEYvMIgQuiTEluCNr+xww/7oAMUcYD38XwrF
eKob87W//x6bMu2XygM0zXfpM0V5xYIG6VDLg0gythzxcC+JOUmKmDhFLGWsvs3f
plA1BUXEJALxeZMbtTta0kr0pw0nIKN+zazaJmGwkREs2df+XDfuyxWxNt17H46p
RvsDafIw+E6nl+MfR/ZlcCmrtm6JZSzKQufph/+xVgLHRlMOKKs+0151EDSzGEra
nDCvnhLfFha+ro4xl70QgKu6hlRu/0oxCfx/jh8/kKOXeuOwfuTsBWXRysfxqY57
b82gD5e2OFeB+F2PnIPIpst5iyT9I12NeTSUeEml5b/gw4JH
=UBbA
-----END PGP PUBLIC KEY BLOCK-----
This is the working repo for the MDP Special Interest Group (SIG). MindSpore Deep Probabilistic Programming (MDP) is a programming library for Bayesian deep learning. The target of MDP is to bridge the gap between deep learning and Bayesian learning. This repo contains all the artifacts, materials, meeting notes and proposals regarding **Probabilistic Programming**, **Deep Probabilistic Programming**, and **Toolbox**. Feedback and contributions are welcome.
1. Probabilistic Programming: Probabilistic Programming (PP) focuses on professional Bayesian learning, including statistical distribution classes used to generate stochastic tensors and probabilistic inference algorithms.
2. Deep Probabilistic Programming: Deep Probabilistic Programming (DPP) aims to provide composable BNN modules, which contain BNN layers, BNN modules, transforms and context.
3. Toolbox: Toolbox provides a set of BNN tools for specific applications, such as Uncertainty Estimation, OoD Detection and so on.
## SIG Leads
Chen Jianfei (Tsinghua University)
## Discussion
- Slack channel: https://app.slack.com/client/T018BLCMSGL/learning-slack
- Documents and artifacts: https://gitee.com/mindspore/community/tree/master/sigs/mdp
## Meeting notes
# MindSpore ModelZoo Special Interest Group (SIG)
This is the working repo for the ModelZoo special interest group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding **state-of-the-art deep learning models** and **implementations** in MindSpore. Feedback and contributions are welcome.
1. **State-of-the-Art Deep Learning Models**: It covers typical deep learning models in image classification, object detection and segmentation, and natural language processing. These models are intended to be well-maintained, tested and kept up to date with the latest MindSpore API.
2. **Implementations**: It provides a collection of example implementations for the models powered by MindSpore high-level APIs. Before implementing a model, make sure that the operations used in the model architecture and data processing pipeline are supported in MindSpore. Users can choose a related model to perform end-to-end training and evaluation on a new dataset.
# SIG Leads
# Meeting notes
* [Saturday May 16, 2020](./meetings/001-20200516.md)
# MindSpore Lite Special Interest Group (SIG)
This is the working repo for the mslite Special Interest Group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding **MS Lite Converter** and **MS Lite Runtime**. Feedback and contributions are welcome.
1. **Converter**: The converter is an offline tool with three parts, frontend, IR, and backend; it aims to generate a compact model by applying graph optimizations and post-training quantization.
2. **Runtime**: The runtime is deployed to the device and executes online; it has two modes, Lite RT and Lite Micro.
# SIG Leads
* Documents and artifacts: https://gitee.com/mindspore/community/tree/master/sigs/mslite
# Meeting notes
## Conference links
## Attendees
* Tom (Huawei)
## Notes
# MindSpore Security Special Interest Group (SIG)
This is the working repo for the MindArmour special interest group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding **model security** and **data privacy protection** in MindSpore. Feedback and contributions are welcome.
1. **model security**: The model security contains four features: attack, detect, defense and evaluate.
2. **Data privacy protection**: We will implement this feature very soon.
# SIG Leads
# Meeting notes
* [Thursday June 04, 2020](./meetings/001-20200604.md)
* [Friday July 03, 2020](./meetings/002-20200703.md)
* [Saturday August 08, 2020](./meetings/003-20200808.md)
# Saturday August 8, 2020 at 2:30pm GMT+8
## Agenda
- Refactor AI Fuzzer module.
- Add new feature: model information reverse analysis technique - membership inference attack.
- Support graph mode of DpOptimizer.
- Support broadcast ability of the Laplace random operation.
## Conference links
- https://imeeting.huawei.com/meeting/joinzoom?id=280361&app=welink
- Meeting ID: 280361
- Please install Zoom before the meeting.
## Attendees
* Wang Ze (Huawei)
* Lv Zhangcheng (Huawei)
* Liu Liu (Huawei)
* Liu Zhidan (Huawei)
* Yang Yuan (Huawei)
* Zheng Huanhuan (Huawei)
* Jin Xiulang (Huawei)
* Li Peng (Huawei)
* Li Yanjun (Huawei), etc
## Notes
* Participants: Wang Ze, Liu Liu, Liu Zhidan, Yang Yuan, Zheng Huanhuan, Jin Xiulang, Li Peng, Li Yanjun, etc.
* The meeting video can be found:
*Post link after meeting*.
## Action items
* None.
# Topic10: AutoML
## Motivation:
Nowadays, training a model that meets accuracy requirements often requires rich expert knowledge and repeated iterative attempts. Although AutoML technology exists, it still suffers from difficult search-space setup and long training times for large search spaces. By combining the iteration history of a user's training runs and analyzing that historical data, a lightweight hyperparameter recommendation method can be realized, which would greatly improve the developer experience.
Performance tuning has similar problems: in different heterogeneous hardware, model, and data-processing scenarios, expert knowledge is required for tuning. Therefore, we aim to lower the performance-tuning threshold by automatically identifying system performance bottlenecks and recommending the best code path.
## Target:
Reduce model development cost, lower the development threshold through automatic hyperparameter configuration and performance optimization paths, and improve model debugging and optimization efficiency.
## Method:
We expect the applicant to conduct AutoML research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join:
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic1: Low-bit Neural Networks Training
## Motivation:
At present, mixed precision can automatically switch between fp16 and fp32 for a network to improve training performance and memory usage. Because operators have different costs on different AI chips, the optimization strategies for different AI chips also differ. Since the network configuration differs across hardware, automatically generating a precision-adjustment strategy that adapts to various hardware, especially a low-bit strategy, has become a difficult problem.
## Target:
​ Self-adaptively provides a low-bit precision training mechanism for various networks.
![target](target.PNG)
## Method:
We expect the applicant to conduct low-bit neural network training research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic2: Memory Optimization
## Motivation:
There are many strategies for memory optimization, such as recomputation and host-device memory swapping. These strategies break through the memory bottleneck by trading extra computation for memory, which allows a larger batch size. Increasing the batch size can often improve the utilization of GPUs and NPUs and thus improve throughput.
## Target:
* Adaptively search for memory optimization strategies to optimize overall network performance.
* Or provide a methodological strategy.
![memor_opt](memor_opt.PNG)
## Method:
We expect the applicant to conduct memory optimization research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic3: Model Innovation
## Motivation:
1. Deep probabilistic model innovation: by combining neural networks and probabilistic models, the model can better support decision-making.
2. Graph neural networks: neural networks are combined with traditional graph structures, oriented towards cognitive reasoning and future trends.
3. Model innovation combining traditional models and neural networks is a research hotspot.
## Target:
- A complete probabilistic sampling library and probabilistic inference algorithm library (learning the probability distribution of the whole population from known samples)
- Design new algorithms for dynamically changing heterogeneous graphs (different feature dimensions and different information aggregation methods)
- Trillion-scale distributed graph data storage, partitioning and sampling
## Method:
We expect the applicant to conduct model innovation research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join:
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic4: AI for Scientific Computing
## Motivation:
* AI modeling: automatic AI modeling can effectively improve modeling efficiency, and convergence analysis can improve model reliability and ensure simple and safe use by users.
* AI solution: the amount of computation for high-order differentiation increases exponentially with the number of parameters and the order. We can design neural network models to solve such classic problems.
## Target:
* AI modeling: construct neural networks, training data and loss functions for scientific computing problems.
* AI solution: use AI models to solve differential equations and optimization problems, achieving the goal that the amount of high-order automatic differentiation computation increases linearly with the order.
## Method:
We expect the applicant to conduct AI for scientific computing research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join:
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic5: Verifiable Trustworthy AI
## Motivation:
- Many aspects of trustworthy AI (or responsible AI), such as robustness, backdoor-free, fairness, privacy protection capabilities, and accountability, have gradually attracted the attention of the industry and academia.
- Scholars' understanding and research on the attributes of trustworthy AI are mostly empirical, and there are few theoretical studies. The verifiable and certifiable analysis, tuning, and evaluation methods of trustworthy AI attributes and bounds, and their relation to explainable AI, require theoretical guidance.
## Target:
Propose a verifiable and certifiable research mechanism and evaluation system for trustworthy AI.
## Method:
We expect the applicant to conduct Verifiable Trustworthy AI research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic6: Confidential AI Computing
## Motivation:
- In the training and deployment process of AI services, several vital resources such as data, models, and computing resources may belong to different parties, so a large amount of data will move across trust domains. The problems of data privacy protection and model confidentiality protection are prominent.
- Confidential computing is an important direction to protect the confidentiality of key data. At present, confidential computing based on trusted execution environment has performance advantages, but its trust model is limited; the trust model of confidential computing based on cryptography (homomorphic encryption, multi-party computing) is simple, but there is still a gap between performance and practicality.
- A series of specialized optimizations may improve the performance of confidential computing in AI scenarios, including but not limited to: cryptography suitable for AI, specialized intermediate representations and compiling strategies for confidential AI computing, and hardware-based acceleration.
## Target:
Realize an "AI on encrypted data and models" computing framework, or its key technologies, that is feasible, flexible and efficient in actual AI application scenarios.
## Method:
We expect the applicant to conduct Confidential AI Computing research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic7: Tensor Differentiable Calculation Framework
## Motivation:
* The new network model poses challenges to the IR expression, optimization and execution of the deep learning framework, including the introduction of a new op abstraction level, and the dynamics of the model.
* Third-party high-performance computing languages or frameworks are accelerated, and there is an urgent need for a more versatile and open tensor computing framework and API design
* The technical challenges of unified optimization of the model layer and the operator layer, including hierarchical IR design, optimization of infrastructure, automatic tuning, loop optimization, etc.
* Differential equations are solved with a large number of differentials, which have high requirements for the differential expression of the framework, interface design, algorithm analysis efficiency and reliability.
## Target:
Driven by cutting-edge applications, and from the perspectives of new models, dynamic models, high-performance computing languages, etc., study the evolution direction and key technology paths of future computing frameworks; for example, support differentiable programming with high-order differentiation and compatibility with traditional Fortran/C numerical computing frameworks.
## Method:
We expect the applicant to conduct tensor differentiable calculation framework research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic8: Distributed And Parallel AI Computing Framework
## Motivation:
* The scale and complexity of models are getting higher and higher, such as GPT-3 with 175 billion parameters, millions of face recognition, and tens of billions of feature recommendations.
* It is difficult to split the model manually. For example, developers need to combine information such as calculation amount, cluster size, communication bandwidth, and network topology to construct a parallel mode.
* The expression of the parallel mode lacks adaptability, and the simple graph-level model segmentation cannot obtain high-efficiency speedup. It requires the decoupling of algorithm logic and parallel logic.
## Target:
Driven by super-large models, research key technologies for accelerating distributed training, including but not limited to automatic parallelism, hybrid parallelism, memory optimization, and elastic scaling, such as achieving efficient heterogeneous automatic parallelism and linear speedup.
## Method:
We expect the applicant to conduct distributed and parallel AI computing framework research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com
# Topic9: Explainable AI
## Motivation:
The current deep learning model is essentially a black box due to its technical complexity, which leads to the opacity and inexplicability of AI services and further restricts their commercial application and promotion. Existing interpretable AI technology mainly focuses on how to provide limited engineering auxiliary information about the model, but ignores understanding AI models from the perspective of human cognition.
Humans usually understand things through analogies, metaphors, induction and other cognitive methods, and follow a certain process of mental model construction. Thus, in this project, we expect to explore more systematic and interpretable AI methods that conform to human cognition, including interactive interfaces, interpretation methods, measurement methods, and so on.
## Target:
​ A complete set of interpretable AI methods and strategies in line with human cognition, providing necessary interactive cognitive interface design solutions for different scenarios and different cognitions, and a case study for typical scenarios.
## Method:
We expect the applicant to conduct XAI research based on MindSpore, and hope to get your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
## How To Join:
1. Submit an issue/PR based on community discussion for consultation or claim on related topics
2. Submit your proposal to us by email xxx@huawei.com