From 919c4409847c09baa96d1b3a54c73dc3972d4503 Mon Sep 17 00:00:00 2001 From: leonwanghui Date: Wed, 19 Aug 2020 11:26:50 +0800 Subject: [PATCH] Fix some typo warnings in research WG folder --- ...atic-Model-Optimization-Recommendation.md} | 30 ++++++++-------- ...Topic1_Low-bit-Neural-Networks-Training.md | 4 +-- .../research/Topic2_Memory-Optimization.md | 2 +- .../research/Topic3_Model-Innovation.md | 4 +-- .../Topic4_AI-for-Scientific-Computing.md | 6 ++-- .../Topic5_Verifiable-Trustworthy-AI.md | 4 +-- .../Topic6_Confidential-AI-Computing.md | 4 +-- ...sor-Differentiable-Computing-Framework.md} | 34 +++++++++---------- ...ted-and-Parallel-AI-Computing-Framework.md | 2 +- ...opic9_XAI-in-line-with-Human-Cognitive.md} | 31 ++++++++--------- 10 files changed, 60 insertions(+), 61 deletions(-) rename working-groups/research/{Topic10_AutoML.md => Topic10_Automatic-Model-Optimization-Recommendation.md} (94%) rename working-groups/research/{Topic7_Tensor-Differentiable-Calculation-Framework.md => Topic7_Tensor-Differentiable-Computing-Framework.md} (92%) rename working-groups/research/{Topic9_Explainable-AI.md => Topic9_XAI-in-line-with-Human-Cognitive.md} (95%) diff --git a/working-groups/research/Topic10_AutoML.md b/working-groups/research/Topic10_Automatic-Model-Optimization-Recommendation.md similarity index 94% rename from working-groups/research/Topic10_AutoML.md rename to working-groups/research/Topic10_Automatic-Model-Optimization-Recommendation.md index f35e167..9b2b847 100644 --- a/working-groups/research/Topic10_AutoML.md +++ b/working-groups/research/Topic10_Automatic-Model-Optimization-Recommendation.md @@ -1,15 +1,15 @@ -# Topic10:Automatic model optimization recommendation - -## Motivation: -* Nowadays, training a high accuracy and high performance model often requires rich expert knowledge and repeated iterative attempts. 
AutoML makes it easier to apply and reduce the demand for experienced human experts, however, there are still some difficulties in setting search space which lead to large search spaces and long training time. If we can combine the iterative history of user training and analyze historical training data, a lite hyper-parameter recommendation method can be realized, which can greatly improve the developer experience. -* Meanwhile, there are similar problems for model performance tuning, In different heterogeneous hardware, models, and data processing scenarios, expert knowledge is also required. Therefore, we aim to reduce the performance tuning threshold by automatically identifying system performance bottlenecks and recommending the best code path. -​ -## Target: -This feature automatically recommends optimized hyper-parameter configurations and performance optimization paths, reducing the threshold for model development and use and improving the model debugging and optimization efficiency. -​ -## Method: -​We expect the applicant can conduct Automatic model optimization recommendation research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. - -## How To Join: -* Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email roc.wangyunpeng@huawei.com \ No newline at end of file +# Topic10: Automatic model optimization recommendation + +## Motivation: +* Nowadays, training a high-accuracy and high-performance model often requires rich expert knowledge and repeated iterative attempts. AutoML makes this easier and reduces the demand for experienced human experts; however, setting the search space remains difficult, which leads to large search spaces and long training times. 
If we can combine the iterative history of user training and analyze historical training data, a lightweight hyper-parameter recommendation method can be realized, which can greatly improve the developer experience. +* Meanwhile, there are similar problems for model performance tuning. In different heterogeneous hardware, models, and data processing scenarios, expert knowledge is also required. Therefore, we aim to reduce the performance tuning threshold by automatically identifying system performance bottlenecks and recommending the best code path. + +## Target: +This feature automatically recommends optimized hyper-parameter configurations and performance optimization paths, reducing the threshold for model development and use and improving model debugging and optimization efficiency. + +## Method: +We expect applicants to conduct automatic model optimization recommendation research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join: +* Submit an issue/PR based on community discussion for consultation or claim on related topics +* Submit your proposal to us by email diff --git a/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md b/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md index b0e9d87..bc864ce 100644 --- a/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md +++ b/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md @@ -1,4 +1,4 @@ -# Topic1: Low-bit Neural Networks Training +# Topic1: Low-bit Neural Networks Training ## Motivation: ​At present, mixed precision can automatically adjust the accuracy of fp16 and fp32 for the network to improve training performance and memory optimization. 
Because operators have different costs on different AI chips, the optimization strategies for different AI chips also differ. Network configurations likewise vary with the hardware, so automatically generating a precision adjustment strategy, especially a low-bit strategy, that adapts to various hardware has become a difficult problem. @@ -13,4 +13,4 @@ ## How To Join: * Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email baochong@huawei.com \ No newline at end of file +* Submit your proposal to us by email diff --git a/working-groups/research/Topic2_Memory-Optimization.md b/working-groups/research/Topic2_Memory-Optimization.md index 29a0b47..8900401 100644 --- a/working-groups/research/Topic2_Memory-Optimization.md +++ b/working-groups/research/Topic2_Memory-Optimization.md @@ -14,4 +14,4 @@ Adaptive search memory optimization strategy to find the best balance between re ## How To Join: * Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email baochong@huawei.com \ No newline at end of file +* Submit your proposal to us by email diff --git a/working-groups/research/Topic3_Model-Innovation.md b/working-groups/research/Topic3_Model-Innovation.md index 93929d3..13fba7b 100644 --- a/working-groups/research/Topic3_Model-Innovation.md +++ b/working-groups/research/Topic3_Model-Innovation.md @@ -11,8 +11,8 @@ * Trillion distributed graph data storage, segmentation and sampling ## Method: -​We expect the applicant can conduct model innovation research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. 
+We expect applicants to conduct model innovation research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. ## How To Join: * Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email wang1@huawei.com \ No newline at end of file +* Submit your proposal to us by email diff --git a/working-groups/research/Topic4_AI-for-Scientific-Computing.md b/working-groups/research/Topic4_AI-for-Scientific-Computing.md index a053ab2..b911828 100644 --- a/working-groups/research/Topic4_AI-for-Scientific-Computing.md +++ b/working-groups/research/Topic4_AI-for-Scientific-Computing.md @@ -1,11 +1,11 @@ -# Topic4:AI for Scientific Computing +# Topic4: AI for Scientific Computing ## Motivation: * AI modeling:AI automatic modeling can effectively improve modeling efficiency of scientific calculations, and convergence analysis can improve model reliability and ensure simple and safe use by users. * AI solution:The calculation amount of high-order differential increases exponentially with the parameter and the order. We can design neural network models to solve such classic problems. ## Target: - * AI modeling:Construct a neural network, training data and loss function for scientific computing problems. + * AI modeling: Construct a neural network, training data, and loss function for scientific computing problems. * AI solution:AI model solves differential equations, solves optimization problems, achieve the goal that the amount of high-order automatic differential calculation increases linearly with the order. 
## Method: @@ -13,4 +13,4 @@ ## How To Join: * Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email wang1@huawei.com \ No newline at end of file +* Submit your proposal to us by email diff --git a/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md b/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md index a66b196..7ad10de 100644 --- a/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md +++ b/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md @@ -1,4 +1,4 @@ -# Topic5: Verifiable/certifiable Trustworthy AI +# Topic5: Verifiable/certifiable Trustworthy AI ## Motivation: * Many aspects of trustworthy AI (or responsible AI), such as adversarial robustness, backdoor, fairness, privacy protection capabilities, and accountability, have gradually attracted the attention of the industry and academia. @@ -12,4 +12,4 @@ ## How To Join: * Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email wangze14@huawei.com +* Submit your proposal to us by email diff --git a/working-groups/research/Topic6_Confidential-AI-Computing.md b/working-groups/research/Topic6_Confidential-AI-Computing.md index 88b15da..ebf9de1 100644 --- a/working-groups/research/Topic6_Confidential-AI-Computing.md +++ b/working-groups/research/Topic6_Confidential-AI-Computing.md @@ -1,4 +1,4 @@ -# Topic6: Confidential AI Computing +# Topic6: Confidential AI Computing ## Motivation: * In the training and deployment process of AI services, several vital resources such as data, models, and computing resources may belong to different parties. A large amount of data will move across trust domains. The problems of data privacy protection and model confidentiality protection are prominent. 
@@ -13,4 +13,4 @@ Realize a Confidential AI Computing framework, or its key technologies, with f ## How To Join: * Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email wangze14@huawei.com \ No newline at end of file +* Submit your proposal to us by email diff --git a/working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md b/working-groups/research/Topic7_Tensor-Differentiable-Computing-Framework.md similarity index 92% rename from working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md rename to working-groups/research/Topic7_Tensor-Differentiable-Computing-Framework.md index cd98416..9fabaee 100644 --- a/working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md +++ b/working-groups/research/Topic7_Tensor-Differentiable-Computing-Framework.md @@ -1,17 +1,17 @@ -# Topic7: Tensor Differentiable Computing Framework - -## Motivation: -* The new network model poses challenges to the IR expression, optimization and execution of the deep learning framework, including the introduction of a new op abstraction level, and the dynamics of the model. -* The acceleration of third-party high-performance computing languages ​​or frameworks urgently requires a more versatile and open tensor computing framework and API design. -* The technical challenges of unified optimization of the model layer and the operator layer, including hierarchical IR design, optimization of infrastructure, automatic tuning, loop optimization, etc. -* Differential equations are solved with a large number of differentials, which have high requirements for the differential expression of the framework, interface design, algorithm analysis efficiency and reliability. 
- -## Target: -​Driven by cutting-edge applications, from the perspectives of new models, dynamic models, high-performance computing languages, etc., study the evolution direction and key technology paths of future computing frameworks. For example, it supports differentiable programming of high-order differentiation. - -## Method: -​We expect the applicant can conduct tensor differentiable Computing framework research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. - -## How To Join: -* Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email peng.yuanpeng@huawei.com \ No newline at end of file +# Topic7: Tensor Differentiable Computing Framework + +## Motivation: +* New network models pose challenges to the IR expression, optimization, and execution of deep learning frameworks, including the introduction of new op abstraction levels and the dynamics of models. +* The acceleration of third-party high-performance computing languages or frameworks urgently requires a more versatile and open tensor computing framework and API design. +* Unified optimization of the model layer and the operator layer poses technical challenges, including hierarchical IR design, infrastructure optimization, automatic tuning, loop optimization, etc. +* Solving differential equations involves a large number of differentials, which places high requirements on the framework's differential expression, interface design, and algorithm analysis efficiency and reliability. + +## Target: +Driven by cutting-edge applications, study the evolution direction and key technology paths of future computing frameworks from the perspectives of new models, dynamic models, high-performance computing languages, etc. 
For example, support for differentiable programming with high-order differentiation. + +## Method: +We expect applicants to conduct tensor differentiable computing framework research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join: +* Submit an issue/PR based on community discussion for consultation or claim on related topics +* Submit your proposal to us by email diff --git a/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md b/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md index 91f99ab..7b04d91 100644 --- a/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md +++ b/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md @@ -13,4 +13,4 @@ ## How To Join: * Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email peng.yuanpeng@huawei.com \ No newline at end of file +* Submit your proposal to us by email diff --git a/working-groups/research/Topic9_Explainable-AI.md b/working-groups/research/Topic9_XAI-in-line-with-Human-Cognitive.md similarity index 95% rename from working-groups/research/Topic9_Explainable-AI.md rename to working-groups/research/Topic9_XAI-in-line-with-Human-Cognitive.md index 286f270..1bb63e4 100644 --- a/working-groups/research/Topic9_Explainable-AI.md +++ b/working-groups/research/Topic9_XAI-in-line-with-Human-Cognitive.md @@ -1,16 +1,15 @@ -# Topic9:XAI in line with human cognitive - -## Motivation: -* The current deep learning model is essentially black box due to its technical complexity , which leads to the opacity and inexplicability of AI services and further restricts the commercial application and promotion of AI services. 
Existing explainable AI technology mainly focuses on how to provide limited engineering auxiliary information to the model, but ignores the understanding of AI models from the perspective of human cognition. -* Humans usually understand things through analogies, metaphors, induction and other cognitive methods, and have a certain process of mental cognition construction. Thus, in this project, we expect to be able to explore more systematic and explainable AI methods that conform to human cognition, including interactive interfaces, explainable methods, measurement methods, and so on. - - -## Target: -A complete set of explainable AI methods and strategies in line with human cognition, providing necessary interactive cognitive interface design solutions for different scenarios and different cognitions, and a case study for typical scenarios. - -## Method: -​We expect the applicant can conduct XAI research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. - -## How To Join: -* Submit an issue/PR based on community discussion for consultation or claim on related topics -* Submit your proposal to us by email roc.wangyunpeng@huawei.com \ No newline at end of file +# Topic9: XAI in line with human cognition + +## Motivation: +* The current deep learning model is essentially a black box due to its technical complexity, which leads to the opacity and inexplicability of AI services and further restricts the commercial application and promotion of AI services. Existing explainable AI technology mainly focuses on how to provide limited engineering auxiliary information to the model, but ignores the understanding of AI models from the perspective of human cognition. 
+* Humans usually understand things through analogies, metaphors, induction and other cognitive methods, and have a certain process of mental cognition construction. Thus, in this project, we expect to be able to explore more systematic and explainable AI methods that conform to human cognition, including interactive interfaces, explainable methods, measurement methods, and so on. + +## Target: +A complete set of explainable AI methods and strategies in line with human cognition, providing necessary interactive cognitive interface design solutions for different scenarios and different cognitions, and a case study for typical scenarios. + +## Method: +We expect applicants to conduct XAI research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join: +* Submit an issue/PR based on community discussion for consultation or claim on related topics +* Submit your proposal to us by email -- GitLab