diff --git a/working-groups/research/Topic10_AutoML.md b/working-groups/research/Topic10_AutoML.md index dc5931c57d375151d9de11a229d092835f6e6b60..f35e167c8acd2733fc74ebd64d785e33899776a4 100644 --- a/working-groups/research/Topic10_AutoML.md +++ b/working-groups/research/Topic10_AutoML.md @@ -1,20 +1,15 @@ -# Topic10:AutoML +# Topic10: Automatic Model Optimization Recommendation ## Motivation: - -​ Nowadays, training a model that meets the accuracy requirements often requires rich expert knowledge and repeated iterative attempts. Although there is AutoML technology, there are still problems of difficult search space setting and long training time for large search spaces. If you can combine the iterative history of user training and analyze historical data, a lightweight hyperparameter recommendation method can be realized, which can greatly improve the developer experience. - -​ Similarly, for performance tuning, there are similar problems. In different heterogeneous hardware, models, and data processing scenarios, expert knowledge is required for tuning. Therefore, we aim to reduce the performance tuning threshold by automatically identifying system performance bottlenecks and recommending the best code path. - +* Nowadays, training a high-accuracy, high-performance model often requires rich expert knowledge and repeated iterative attempts. AutoML makes this easier and reduces the demand for experienced human experts; however, setting the search space remains difficult, which leads to large search spaces and long training times. If we can combine the iterative history of user training and analyze historical training data, a lightweight hyper-parameter recommendation method can be realized, which can greatly improve the developer experience. +* Meanwhile, model performance tuning has similar problems: in different heterogeneous hardware, model, and data processing scenarios, expert knowledge is also required.
Therefore, we aim to reduce the performance tuning threshold by automatically identifying system performance bottlenecks and recommending the best code path. ## Target: - -​ Reduce model development cost, set up thresholds through automatic hyper-parameter configuration and performance optimization paths, and improve model debugging and optimization efficiency. - +This feature automatically recommends optimized hyper-parameter configurations and performance optimization paths, reducing the threshold for model development and use, and improving model debugging and optimization efficiency. ## Method: - -​ We expect the applicant can conduct AutoML research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. +We expect the applicant to conduct automatic model optimization recommendation research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. ## How To Join: - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. 
Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email roc.wangyunpeng@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md b/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md index 4d4c30e6da537a812319650445d84288ee88129d..b0e9d8737ee3da6d27456d1b9367007cdc30e378 100644 --- a/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md +++ b/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md @@ -1,20 +1,16 @@ # Topic1: Low-bit Neural Networks Training ## Motivation: - -​ At present, mixed precision can automatically adjust the accuracy of fp16 and fp32 for the network to improve training performance and memory optimization. Because operators have different costs on different AI chips, all optimization strategies for different AI chips are different. The network configuration of different hardware is different, so how to automatically generate the precision adjustment strategy that adapts to various hardware, especially the low bit strategy has become a difficult problem. +At present, mixed precision can automatically adjust between fp16 and fp32 for a network to improve training performance and optimize memory. Because operators have different costs on different AI chips, the optimization strategies for different AI chips differ, and network configurations also differ across hardware. How to automatically generate a precision adjustment strategy that adapts to various hardware, especially a low-bit strategy, has therefore become a difficult problem. ## Target: - -​ Self-adaptively provides a low-bit precision training mechanism for various networks. +Self-adaptively provide a low-bit precision training mechanism for various networks.
![target](target.PNG) ## Method: +We expect the applicant to conduct low-bit neural network training research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. -​ We expect the applicant can conduct low-bit neural networks training research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. - -## How To Join - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +## How To Join: +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email baochong@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic2_Memory-Optimization.md b/working-groups/research/Topic2_Memory-Optimization.md index 074cb61aed843db729e6db0bbc8a33270d50c5a2..29a0b47890abe506bbf0da2f11d1a30a76df5d92 100644 --- a/working-groups/research/Topic2_Memory-Optimization.md +++ b/working-groups/research/Topic2_Memory-Optimization.md @@ -1,22 +1,17 @@ # Topic2: Memory Optimization ## Motivation: - -​ There are many strategies for memory optimization, such as recalculation and host-device memory switching. These strategies further break through the memory bottleneck by increasing the amount of calculation and increase the batchsize. Increasing batchsize can often improve the utilization of GPU and NPU to improve throughput performance. +There are many strategies for memory optimization, such as recomputation and host-device memory swapping.
These strategies trade extra computation for memory, further breaking through the memory bottleneck and allowing a larger batch size. Increasing the batch size can often improve GPU and NPU utilization and thus throughput. ## Target: -* Adaptive search memory optimization strategy to optimize the overall network performance. - -* Or provide a methodological strategy. +Adaptively search memory optimization strategies to find the best balance between recomputation overhead and memory optimization benefits, so as to optimize overall network performance. - ![memor_opt](memor_opt.PNG) +![memor_opt](memor_opt.PNG) ## Method: +We expect the applicant to conduct memory optimization research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. -​ We expect the applicant can conduct memory optimization research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. - -## How To Join - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. 
Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +## How To Join: +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email baochong@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic3_Model-Innovation.md b/working-groups/research/Topic3_Model-Innovation.md index fa59dc06e16ff7de06304f1a091f3f0dec5a730f..93929d3d257d9988d39fb4e9d5bd428ee3beb14b 100644 --- a/working-groups/research/Topic3_Model-Innovation.md +++ b/working-groups/research/Topic3_Model-Innovation.md @@ -1,22 +1,18 @@ # Topic3:Model Innovation ## Motivation: - -1. In-depth probability model innovation: through the combination of neural network and probability model, the model can better help decision-making. -2. Graph neural network: The neural network is combined with the traditional graph structure, oriented to cognitive reasoning and future trends. -3. Model innovation combining traditional models and neural networks is a research hotspot. +* Model innovation combining traditional models and neural networks is a research hotspot. +* Deep probability model innovation: by combining neural networks with probability models, models can better support decision-making. +* Graph neural networks: neural networks are combined with traditional graph structures, oriented toward cognitive reasoning and future trends.
## Target: - -- Complete probability sampling library and probability inference (learning the probability distribution of the overall sample through known samples) algorithm library -- Design new algorithms for dynamically changing heterogeneous graphs (different feature dimensions and different information aggregation methods) -- Trillion distributed graph data storage, segmentation and sampling +* A complete probability sampling library and a probability inference algorithm library (learning the probability distribution of the overall population from known samples). +* Design new algorithms for dynamically changing heterogeneous graphs (different feature dimensions and different information aggregation methods). +* Trillion-scale distributed graph data storage, partitioning, and sampling. ## Method: - -​ We expect the applicant can conduct model innovation research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. +We expect the applicant to conduct model innovation research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. ## How To Join: - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. 
Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email wang1@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic4_AI-for-Scientific-Computing.md b/working-groups/research/Topic4_AI-for-Scientific-Computing.md index 337573e15c0a5930e3c751a89b5e756a65883999..a053ab2c30256e5446203be6812a7c7ebfd9950e 100644 --- a/working-groups/research/Topic4_AI-for-Scientific-Computing.md +++ b/working-groups/research/Topic4_AI-for-Scientific-Computing.md @@ -1,20 +1,16 @@ # Topic4:AI for Scientific Computing ## Motivation: - -* AI modeling :AI automatic modeling can effectively improve modeling efficiency, and convergence analysis can improve model reliability and ensure simple and safe use by users. -* AI solution:The calculation amount of high-order differential increases exponentially with the parameter and the order. We can design neural network models to solve such classic problems. +* AI modeling: AI automatic modeling can effectively improve the modeling efficiency of scientific computing, and convergence analysis can improve model reliability and ensure simple and safe use by users. +* AI solution: The computation of high-order differentials increases exponentially with the number of parameters and the order. We can design neural network models to solve such classic problems. ## Target: - - * AI modeling:Construct a neural network, training data and Loss function for scientific computing problems. + * AI modeling: Construct a neural network, training data, and loss function for scientific computing problems. * AI solution:AI model solves differential equations, solves optimization problems, achieve the goal that the amount of high-order automatic differential calculation increases linearly with the order.
## Method: - -​ We expect the applicant can conduct AI for scientific computing research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. +We expect the applicant to conduct AI for scientific computing research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. ## How To Join: - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email wang1@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md b/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md index 00056f5b19f45a7cf650f0120559b18bc5b78e16..a66b196642d3d0898af46cf6f498f6734db9fc0a 100644 --- a/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md +++ b/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md @@ -1,19 +1,15 @@ -# Topic5: Verifiable Trustworthy AI +# Topic5: Verifiable/Certifiable Trustworthy AI ## Motivation: - -- Many aspects of trustworthy AI (or responsible AI), such as robustness, backdoor-free, fairness, privacy protection capabilities, and accountability, have gradually attracted the attention of the industry and academia. -- Scholars' understanding and research on the attributes of trustworthy AI are mostly empirical, and there are few theoretical studies.
The verifiable and certifiable analysis, tuning, and evaluation methods of trustworthy AI attributes and bounds, and their relation to explainable AI, require theoretical guidance. +* Many aspects of trustworthy AI (or responsible AI), such as adversarial robustness, freedom from backdoors, fairness, privacy protection, and accountability, have gradually attracted the attention of industry and academia. +* Current studies on trustworthy AI are mostly empirical, and theoretical studies are few. Verifiable and certifiable trustworthy AI, especially its training, evaluation, and testing methods, and its underlying insights related to explainable AI, requires theoretical guidance. ## Target: - -​ Propose verifiable and certifiable research mechanism and evaluation system on trustworthy AI. +Propose mechanisms and an evaluation system for verifiable or certifiable trustworthy AI. ## Method: +We expect the applicant to conduct verifiable trustworthy AI research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. -​ We expect the applicant can conduct Verifiable Trustworthy AI research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. - -## How To Join - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. 
Submit your proposal to us by email xxx@huawei.com +## How To Join: +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email wangze14@huawei.com diff --git a/working-groups/research/Topic6_Confidential-AI-Computing.md b/working-groups/research/Topic6_Confidential-AI-Computing.md index e8df265aaafa29389f23200196799785f969bd02..88b15da8887e41765c4cd89bd3d11566a99afd4f 100644 --- a/working-groups/research/Topic6_Confidential-AI-Computing.md +++ b/working-groups/research/Topic6_Confidential-AI-Computing.md @@ -1,20 +1,16 @@ # Topic6: Confidential AI Computing ## Motivation: - -- In the training and deployment process of AI services, several vital resources such as data, models, and computing resources may belong to different parties, so a large amount of data will move across trust domains. The problems of data privacy protection and model confidentiality protection are prominent. -- Confidential computing is an important direction to protect the confidentiality of key data. At present, confidential computing based on trusted execution environment has performance advantages, but its trust model is limited; the trust model of confidential computing based on cryptography (homomorphic encryption, multi-party computing) is simple, but there is still a gap between performance and practicality. -- A series of specialized optimizations may improve the performance of confidential computing in AI scenarios, including but not limited to: cryptography suitable for AI, specialized intermediate representation and compling strategy for confidential AI computing, and hardware-based acceleration. +* In the training and deployment of AI services, vital resources such as data, models, and computing resources may belong to different parties, so a large amount of data moves across trust domains. The problems of data privacy protection and model confidentiality protection are therefore prominent.
+* Confidential computing is an important direction for protecting the confidentiality of key data. At present, confidential computing based on a trusted execution environment has performance advantages, but its trust model is limited; confidential computing based on cryptography (homomorphic encryption, multi-party computation) has a simple trust model, but there is still a gap between its performance and practicality. +* A series of specialized optimizations may improve the performance of confidential computing in AI scenarios, including but not limited to cryptography suitable for AI, a specialized intermediate representation and compiling strategy, and hardware-based acceleration. ## Target: - -​ Realize an AI on Encrypted Data & Model computing framework with feasible, flexible and efficient performance in actual AI application scenarios, or key technologies. +Realize a Confidential AI Computing framework, or its key technologies, with feasible, flexible, and efficient performance in actual AI application scenarios. ## Method: +We expect the applicant to conduct Confidential AI Computing research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. -​ We expect the applicant can conduct Confidential AI Computing research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. - -## How To Join - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. 
Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +## How To Join: +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email wangze14@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md b/working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md index 47b912130dde5666a3ad0083a5e521ee51816be8..cd9841615edb887780069a053d3e3b783665f450 100644 --- a/working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md +++ b/working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md @@ -1,21 +1,17 @@ -# Topic7: Tensor Differentiable Calculation Framework +# Topic7: Tensor Differentiable Computing Framework ## Motivation: - * The new network model poses challenges to the IR expression, optimization and execution of the deep learning framework, including the introduction of a new op abstraction level, and the dynamics of the model. -* Third-party high-performance computing languages or frameworks are accelerated, and there is an urgent need for a more versatile and open tensor computing framework and API design +* The acceleration of third-party high-performance computing languages or frameworks urgently requires a more versatile and open tensor computing framework and API design. * The technical challenges of unified optimization of the model layer and the operator layer, including hierarchical IR design, optimization of infrastructure, automatic tuning, loop optimization, etc. * Differential equations are solved with a large number of differentials, which have high requirements for the differential expression of the framework, interface design, algorithm analysis efficiency and reliability.
-##Target: - -​ Driven by cutting-edge applications, from the perspectives of new models, dynamic models, high-performance computing languages, etc., study the evolution direction and key technology paths of future computing frameworks. For example, it supports differentiable programming of high-order differentiation and is compatible with traditional Fortran/C numerical calculation framework. - -##Method: - -​ We expect the applicant can conduct tensor differentiable calculation framework research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. +## Target: +Driven by cutting-edge applications, study the evolution direction and key technology paths of future computing frameworks from the perspectives of new models, dynamic models, high-performance computing languages, etc. For example, support differentiable programming with high-order differentiation. -## How To Join ## Method: +We expect the applicant to conduct tensor differentiable computing framework research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. 
Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +## How To Join: +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email peng.yuanpeng@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md b/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md index 2b7b21327ddf47a85664d5fef245da246727b632..91f99ab0d91441fa24ebb29e869ee8b03c931f24 100644 --- a/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md +++ b/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md @@ -1,20 +1,16 @@ # Topic8: Distributed And Parallel AI Computing Framework ## Motivation: - * The scale and complexity of models are getting higher and higher, such as GPT-3 with 175 billion parameters, millions of face recognition, and tens of billions of feature recommendations. * It is difficult to split the model manually. For example, developers need to combine information such as calculation amount, cluster size, communication bandwidth, and network topology to construct a parallel mode. * The expression of the parallel mode lacks adaptability, and the simple graph-level model segmentation cannot obtain high-efficiency speedup. It requires the decoupling of algorithm logic and parallel logic. ## Target: - -​ Driven by super-large models, research key technologies for accelerating distributed training, including but not limited to automatic parallelism, hybrid parallelism, memory optimization, and elastic scaling. +Driven by super-large models, research key technologies for accelerating distributed training, including but not limited to automatic parallelism, hybrid parallelism, memory optimization, and elastic scaling.
For example, achieve efficient heterogeneous automatic parallelism and linear speedup. ## Method: +We expect the applicant to conduct distributed and parallel AI computing framework research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. -​ We expect the applicant can conduct distributed and parallel AI computing framework research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. - -## How To Join - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +## How To Join: +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email peng.yuanpeng@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic9_Explainable-AI.md b/working-groups/research/Topic9_Explainable-AI.md index cdc3c2f6c2ef1db3e4340fe6a7457bdfdf85d20b..286f270f9b9eda7c2968f9f3b0db5f569ee05ca4 100644 --- a/working-groups/research/Topic9_Explainable-AI.md +++ b/working-groups/research/Topic9_Explainable-AI.md @@ -1,20 +1,16 @@ -# Topic9:Explainable AI +# Topic9: XAI in Line with Human Cognition -##Motivation: +## Motivation: +* The current deep learning model is essentially a black box due to its technical complexity, which leads to the opacity and inexplicability of AI services and further restricts their commercial application and promotion.
Existing explainable AI technology mainly focuses on how to provide limited engineering auxiliary information to the model, but ignores the understanding of AI models from the perspective of human cognition. +* Humans usually understand things through analogies, metaphors, induction, and other cognitive methods, and build understanding through a process of mental cognitive construction. Thus, in this project, we expect to explore more systematic explainable AI methods that conform to human cognition, including interactive interfaces, explanation methods, measurement methods, and so on. -​ The current deep learning model is essentially black box due to its technical complexity , which leads to the opacity and inexplicability of AI services and further restricts the commercial application and promotion of AI services. Existing interpretable AI technology mainly focuses on how to provide limited engineering auxiliary information to the model, but ignores the understanding of AI models from the perspective of human cognition -​ Humans usually understand things through analogies, metaphors, induction and other cognitive methods, and have a certain process of mental cognition construction. Thus, in this project, we expect to be able to explore more systematic and interpretable AI methods that conform to human cognition, including interactive interfaces, interpretation methods, measurement methods, and so on. -##Target: - -​ A complete set of interpretable AI methods and strategies in line with human cognition, providing necessary interactive cognitive interface design solutions for different scenarios and different cognitions, and a case study for typical scenarios. +## Target: +A complete set of explainable AI methods and strategies in line with human cognition, providing the necessary interactive cognitive interface design solutions for different scenarios and different cognitive needs, and a case study of typical scenarios.
## Method: - -​ We expect the applicant can conduct XAI research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. +We expect the applicant to conduct XAI research based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. ## How To Join: - -1. Submit an issue/PR based on community discussion for consultation or claim on related topics -2. Submit your proposal to us by email xxx@huawei.com \ No newline at end of file +* Submit an issue/PR based on community discussion to consult on or claim a related topic +* Submit your proposal to us by email roc.wangyunpeng@huawei.com \ No newline at end of file