Commit c3b9f007 authored by mindspore-ci-bot, committed by Gitee

!58 [Lightweight PR]: Incorporate new content modification

Merge pull request !58 from godbaiqi/N/A
# Topic10: Automatic Model Optimization Recommendation
## Motivation:
* Nowadays, training a high-accuracy, high-performance model often requires rich expert knowledge and repeated iterative attempts. AutoML makes this easier and reduces the reliance on experienced human experts, but setting the search space remains difficult, which leads to large search spaces and long training times. If we can combine the iterative history of user training and analyze the historical training data, a lightweight hyper-parameter recommendation method can be realized, which would greatly improve the developer experience (a minimal illustration follows this list).
* Performance tuning faces similar problems: in different heterogeneous hardware, model, and data processing scenarios, expert knowledge is also required. We therefore aim to lower the performance tuning threshold by automatically identifying system performance bottlenecks and recommending the best code path.
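To make the history-based recommendation idea above concrete, here is a minimal Python sketch. It is not part of MindSpore; the trial records and the `suggest` helper are hypothetical and only illustrate reusing past trials instead of searching a large space from scratch.

```python
# Minimal sketch: recommend the next hyper-parameters from historical trials.
# The trial records and the `suggest` helper are illustrative assumptions,
# not an existing MindSpore API.
from statistics import median

# Historical trials: (hyper-parameters, achieved validation accuracy).
history = [
    ({"lr": 0.1,   "batch_size": 256}, 0.71),
    ({"lr": 0.01,  "batch_size": 128}, 0.76),
    ({"lr": 0.02,  "batch_size": 128}, 0.78),
    ({"lr": 0.001, "batch_size": 64},  0.74),
]

def suggest(history, top_k=2):
    """Recommend new hyper-parameters centered on the best historical trials."""
    best = sorted(history, key=lambda t: t[1], reverse=True)[:top_k]
    lrs = [cfg["lr"] for cfg, _ in best]
    batch_sizes = [cfg["batch_size"] for cfg, _ in best]
    # Center the next trial on the median of the top historical trials.
    return {"lr": median(lrs), "batch_size": int(median(batch_sizes))}

print(suggest(history))  # {'lr': 0.015, 'batch_size': 128}
```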
## Target:
Automatically recommend optimized hyper-parameter configurations and performance optimization paths, lowering the threshold for model development and use and improving model debugging and optimization efficiency.
## Method:
We expect the applicant to conduct automatic model optimization recommendation research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: roc.wangyunpeng@huawei.com
# Topic1: Low-bit Neural Networks Training
## Motivation:
At present, mixed precision can automatically adjust a network between fp16 and fp32 to improve training performance and optimize memory. Because operators have different costs on different AI chips, the optimization strategies differ across AI chips, and the network configuration differs across hardware. How to automatically generate a precision adjustment strategy that adapts to various hardware, especially a low-bit strategy, has therefore become a difficult problem.
## Target:
Self-adaptively provide a low-bit precision training mechanism for various networks.
![target](target.PNG)
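For reference, the snippet below is a minimal sketch of the manually configured mixed precision described in the motivation above; the `amp_level` argument and the `Model` signature are assumptions based on MindSpore's public API and may differ across versions. The research goal is to derive such precision assignments, including lower-bit ones, automatically for each hardware target.

```python
# Minimal sketch of today's manually selected mixed-precision training in MindSpore.
# The amp_level argument and its values are assumptions based on the public API;
# the goal of this topic is to generate such precision strategies automatically.
from mindspore import nn
from mindspore.train import Model

net = nn.SequentialCell(nn.Dense(784, 256), nn.ReLU(), nn.Dense(256, 10))
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# "O2" keeps selected operators (e.g. the loss) in fp32 and casts the rest to fp16;
# an adaptive strategy would instead pick per-operator precision for the target
# chip, potentially going below 16 bits.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```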
## Method:
We expect the applicant to conduct low-bit neural network training research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: baochong@huawei.com
# Topic2: Memory Optimization
## Motivation:
There are many strategies for memory optimization, such as recomputation and host-device memory swapping. These strategies trade extra computation for memory, further breaking through the memory bottleneck and allowing a larger batch size. Increasing the batch size can often improve GPU and NPU utilization and thereby improve throughput.
## Target:
Adaptively search for memory optimization strategies that find the best balance between recomputation overhead and memory savings, so as to optimize the overall network performance.
![memor_opt](memor_opt.PNG)
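As a reference point for the baseline mechanism mentioned in the motivation, the sketch below shows how recomputation is typically enabled by hand for a memory-heavy block. It assumes MindSpore's `Cell.recompute()` interface, and the network itself is a made-up example; the research goal is to choose such settings automatically rather than by expert judgment.

```python
# Minimal sketch: manually enabling recomputation for a memory-heavy block.
# Assumes MindSpore's Cell.recompute() interface; the network is a made-up
# example. An adaptive strategy would decide which blocks to recompute
# (trading extra compute for memory) automatically.
from mindspore import nn

class Block(nn.Cell):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, pad_mode="same")
        self.relu = nn.ReLU()

    def construct(self, x):
        return self.relu(self.conv(x))

class Net(nn.Cell):
    def __init__(self):
        super().__init__()
        self.blocks = nn.SequentialCell([Block(64) for _ in range(8)])
        # Drop these activations in the forward pass and recompute them during
        # backpropagation, freeing memory so a larger batch size fits.
        self.blocks.recompute()

    def construct(self, x):
        return self.blocks(x)
```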
## Method:
We expect the applicant to conduct memory optimization research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: baochong@huawei.com
# Topic3: Model Innovation
## Motivation:
* Model innovation that combines traditional models with neural networks is a research hotspot.
* Deep probabilistic model innovation: by combining neural networks with probabilistic models, models can better support decision-making.
* Graph neural networks: neural networks are combined with traditional graph structures, oriented toward cognitive reasoning and future trends.
## Target:
* A complete probability sampling library and probabilistic inference algorithm library (learning the probability distribution of the whole population from known samples).
* New algorithms designed for dynamically changing heterogeneous graphs (different feature dimensions and different information aggregation methods).
* Trillion-scale distributed graph data storage, partitioning, and sampling.
## Method:
We expect the applicant to conduct model innovation research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: wang1@huawei.com
# Topic4: AI for Scientific Computing
## Motivation:
* AI modeling: AI automatic modeling can effectively improve the modeling efficiency of scientific computation, and convergence analysis can improve model reliability and ensure simple, safe use by users.
* AI solution: the computational cost of high-order differentiation grows exponentially with the number of parameters and the order. We can design neural network models to solve such classic problems.
## Target:
* AI modeling: construct neural networks, training data, and loss functions for scientific computing problems.
* AI solution: use AI models to solve differential equations and optimization problems, achieving the goal that the cost of high-order automatic differentiation grows linearly with the order (see the sketch after this list).
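To make the "AI solution" target concrete, the sketch below builds the usual physics-informed loss for the toy ODE u'(x) = u(x) with u(0) = 1. It assumes the functional `mindspore.grad` interface of recent MindSpore versions; the network and the collocation points are illustrative only.

```python
# Minimal sketch of a physics-informed loss for the toy ODE u'(x) = u(x), u(0) = 1.
# Assumes the functional mindspore.grad interface of recent MindSpore versions;
# the network and the collocation points are illustrative only.
import mindspore as ms
from mindspore import nn

net = nn.SequentialCell(nn.Dense(1, 32), nn.Tanh(), nn.Dense(32, 1))

def u(x):
    # Scalar-in, scalar-out wrapper so grad returns du/dx directly.
    return net(x.reshape(1, 1)).reshape(())

du_dx = ms.grad(u)  # first derivative via automatic differentiation

def pinn_loss(xs):
    # ODE residual at the collocation points plus the boundary condition u(0) = 1.
    residual = sum((du_dx(x) - u(x)) ** 2 for x in xs) / len(xs)
    boundary = (u(ms.Tensor(0.0, ms.float32)) - 1.0) ** 2
    return residual + boundary

xs = [ms.Tensor(v, ms.float32) for v in (0.1, 0.5, 0.9)]
print(pinn_loss(xs))  # scalar Tensor to be minimized w.r.t. the network weights
```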
## Method:
We expect the applicant to conduct AI for scientific computing research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: wang1@huawei.com
# Topic5: Verifiable/Certifiable Trustworthy AI
## Motivation:
* Many aspects of trustworthy AI (or responsible AI), such as adversarial robustness, freedom from backdoors, fairness, privacy protection, and accountability, have gradually attracted the attention of industry and academia.
* Current studies on trustworthy AI are mostly empirical, and there are few theoretical results. Verifiable and certifiable trustworthy AI, especially its training, evaluation, and testing methods, and its underlying connection with explainable AI, requires theoretical guidance.
## Target:
Propose research mechanisms and an evaluation system for verifiable or certifiable trustworthy AI.
## Method:
We expect the applicant to conduct verifiable trustworthy AI research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: wangze14@huawei.com
# Topic6: Confidential AI Computing
## Motivation:
* In the training and deployment of AI services, vital resources such as data, models, and computing resources may belong to different parties, so a large amount of data moves across trust domains. Data privacy protection and model confidentiality protection are therefore prominent problems.
* Confidential computing is an important direction for protecting the confidentiality of key data. At present, confidential computing based on a trusted execution environment has performance advantages but a limited trust model, while confidential computing based on cryptography (homomorphic encryption, multi-party computation) has a simple trust model but still falls short of practical performance.
* A series of specialized optimizations may improve the performance of confidential computing in AI scenarios, including but not limited to cryptography suitable for AI, specialized intermediate representations and compilation strategies for confidential AI computing, and hardware-based acceleration.
## Target:
Realize a confidential AI computing framework, or its key technologies, that is feasible, flexible, and efficient in actual AI application scenarios.
## Method:
We expect the applicant to conduct confidential AI computing research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: wangze14@huawei.com
# Topic7: Tensor Differentiable Computing Framework
## Motivation:
* New network models pose challenges to the IR expression, optimization, and execution of deep learning frameworks, including the introduction of new operator abstraction levels and the dynamics of models.
* The acceleration of third-party high-performance computing languages and frameworks urgently requires a more versatile and open tensor computing framework and API design.
* Unified optimization of the model layer and the operator layer poses technical challenges, including hierarchical IR design, optimization infrastructure, automatic tuning, and loop optimization.
* Solving differential equations involves a large number of derivatives, which places high requirements on the framework's differentiation expressiveness, interface design, and the efficiency and reliability of algorithm analysis.
## Target:
Driven by cutting-edge applications, and from the perspectives of new models, dynamic models, high-performance computing languages, and so on, study the evolution direction and key technology paths of future computing frameworks, for example support for differentiable programming with high-order differentiation.
## Method:
We expect the applicant to conduct tensor differentiable computing framework research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: peng.yuanpeng@huawei.com
# Topic8: Distributed and Parallel AI Computing Framework
## Motivation:
* The scale and complexity of models keep increasing, for example GPT-3 with 175 billion parameters, face recognition with millions of identities, and recommendation with tens of billions of features.
* It is difficult to split a model manually: developers need to combine information such as computation cost, cluster size, communication bandwidth, and network topology to construct a parallel scheme (a configuration sketch follows this list).
* The expression of parallelism lacks adaptability, and simple graph-level model partitioning cannot achieve efficient speedup; the algorithm logic and the parallel logic need to be decoupled.
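For orientation, the sketch below shows the kind of entry point that existing (semi-)automatic parallelism exposes in MindSpore; the function names and arguments are assumptions based on the public API and may differ between versions. The research questions of this topic (strategy search, heterogeneous efficiency, elastic scaling) sit behind this interface.

```python
# Minimal sketch of enabling automatic parallelism in MindSpore on 8 devices.
# The function names and arguments (set_auto_parallel_context, communication.init)
# are assumptions based on the public API and may differ between versions;
# the operator-splitting strategy search itself happens inside the framework.
import mindspore as ms
from mindspore.communication import init

init()  # set up the collective communication group (e.g. HCCL or NCCL)

ms.set_context(mode=ms.GRAPH_MODE)
ms.set_auto_parallel_context(
    parallel_mode="auto_parallel",        # let the framework search operator-splitting strategies
    search_mode="sharding_propagation",   # one of the built-in strategy search algorithms
    device_num=8,
)
# From here, Model.train(...) on a sharded dataset runs with the discovered
# parallel strategy; hybrid/pipeline parallelism and memory optimization are
# configured through further options of the same context.
```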
## Target:
Driven by super-large models, research key technologies for accelerating distributed training, including but not limited to automatic parallelism, hybrid parallelism, memory optimization, and elastic scaling, for example achieving efficient heterogeneous automatic parallelism and linear speedup.
## Method:
We expect the applicant to conduct distributed and parallel AI computing framework research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: peng.yuanpeng@huawei.com
# Topic9: XAI in Line with Human Cognition
## Motivation:
* Current deep learning models are essentially black boxes due to their technical complexity, which leads to the opacity and inexplicability of AI services and further restricts their commercial application and promotion. Existing explainable AI technology mainly focuses on providing limited engineering auxiliary information about the model, but ignores understanding AI models from the perspective of human cognition.
* Humans usually understand things through analogies, metaphors, induction, and other cognitive methods, following a certain process of mental construction. In this project, we therefore expect to explore more systematic explainable AI methods that conform to human cognition, including interactive interfaces, explanation methods, measurement methods, and so on.
## Target:
A complete set of explainable AI methods and strategies in line with human cognition, providing the necessary interactive cognitive interface designs for different scenarios and different cognitive needs, together with a case study of typical scenarios.
## Method:
We expect the applicant to conduct XAI research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
## How To Join:
* Submit an issue/PR based on community discussion for consultation or to claim a related topic.
* Submit your proposal to us by email: roc.wangyunpeng@huawei.com