* TinyBERT: a smaller and faster version of BERT using transformer distillation, for natural language understanding on the GLUE benchmark.
* SE-ResNet50: adds Squeeze-and-Excitation blocks (SE blocks) to the ResNet50 network to model channel interdependencies, for image classification on the ImageNet 2012 dataset.
* Inception V3: the third version of the Inception convolutional architecture, for image classification on the ImageNet 2012 dataset.
* Frontend and user interface
* High-level packaging of the embedding operator to support field-segmented embeddings for Wide&Deep.
* Load multi-node checkpoints into a single process to support host-device hybrid inference.
* Support Concat/Tile/StridedSlice distributed operators.
* Support gradient accumulation and batch training split.
* Support variable-argument input for Cell objects.
* Optimize parameter mixed-precision calculation for PyNative mode.
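The gradient-accumulation feature above sums micro-batch gradients and applies one optimizer step with the averaged gradient, which is equivalent to training on the full batch at once. A minimal, framework-free sketch of the idea (plain Python with a toy per-sample loss, not the MindSpore API; `grad`, `train_step`, and the learning rate are illustrative assumptions):

```python
# Sketch of gradient accumulation: sum micro-batch gradients,
# then apply a single update with the mean gradient.

def grad(w, x):
    # Gradient of the per-sample loss (w*x - 1)^2 with respect to w.
    return 2 * (w * x - 1) * x

def train_step(w, micro_batches, lr=0.1):
    # Accumulate gradients across all micro-batches before updating.
    acc = 0.0
    n = 0
    for batch in micro_batches:
        for x in batch:
            acc += grad(w, x)
            n += 1
    return w - lr * acc / n  # one update with the mean gradient

w = 0.0
for _ in range(50):
    w = train_step(w, [[1.0, 2.0], [0.5, 1.5]])
```

Because the update uses the mean of all accumulated gradients, splitting the batch into micro-batches changes memory usage but not the optimization trajectory.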
* Deep Probabilistic Programming
* Support statistical distribution classes used to generate stochastic tensors.
* Support probabilistic inference algorithms.
* Support BNN layers used to construct Bayesian neural networks (BNNs) in Graph mode.
* Support interfaces for transformation between BNN and DNN in Graph mode.
* Support uncertainty estimation for both epistemic and aleatoric uncertainty.
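The uncertainty-estimation feature above separates two quantities: epistemic uncertainty (spread caused by uncertainty in the model's weights) and aleatoric uncertainty (noise inherent in the data). A rough Monte Carlo sketch of how the two are commonly estimated, in plain Python rather than the MindSpore API (`stochastic_predict` and all numbers are illustrative assumptions):

```python
import random
import statistics

random.seed(0)

def stochastic_predict(x):
    # Stand-in for a BNN forward pass: the weight is re-sampled on
    # every call, so repeated predictions on the same input differ.
    w = random.gauss(1.0, 0.1)   # sampled weight (epistemic source)
    mean = w * x                 # predicted mean
    variance = 0.25              # predicted data-noise variance (aleatoric)
    return mean, variance

def estimate_uncertainty(x, n_samples=1000):
    # Monte Carlo estimate: variance of the sampled means approximates
    # epistemic uncertainty; the mean predicted variance approximates
    # aleatoric uncertainty.
    means, variances = zip(*(stochastic_predict(x) for _ in range(n_samples)))
    epistemic = statistics.pvariance(means)
    aleatoric = statistics.fmean(variances)
    return epistemic, aleatoric

ep, al = estimate_uncertainty(2.0)
```

With these toy numbers the epistemic estimate should hover near Var(w) * x^2 = 0.01 * 4 = 0.04, while the aleatoric estimate is the constant 0.25 the model reports.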
* User interface change log
* Change base class of Parameter([!3473](https://gitee.com/mindspore/mindspore/pulls/3473))
* Change binary format to MindIR([!4258](https://gitee.com/mindspore/mindspore/pulls/4258))
* Change export format from GEIR to AIR([!4269](https://gitee.com/mindspore/mindspore/pulls/4269))
* Initialize parameter data by default([!3967](https://gitee.com/mindspore/mindspore/pulls/3967))
* Change IndexedSlices to RowTensor([!4031](https://gitee.com/mindspore/mindspore/pulls/4031))
* Parallel mode must be set or changed before any Initializer is created([!4801](https://gitee.com/mindspore/mindspore/pulls/4801))
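The IndexedSlices-to-RowTensor rename above concerns a sparse representation in which only a few rows of a large tensor are non-zero, which keeps gradients from operations like embedding lookups compact. A rough, framework-free sketch of the idea (the class below is purely illustrative, not the MindSpore `RowTensor` API):

```python
class RowTensorSketch:
    """Sparse row representation: only the listed rows carry values."""

    def __init__(self, indices, values, dense_shape):
        self.indices = indices         # row indices with non-zero data
        self.values = values           # one row of values per index
        self.dense_shape = dense_shape  # (rows, cols) of the full tensor

    def to_dense(self):
        # Materialize the full tensor; untouched rows stay zero.
        rows, cols = self.dense_shape
        dense = [[0.0] * cols for _ in range(rows)]
        for i, row in zip(self.indices, self.values):
            dense[i] = list(row)
        return dense

# Only rows 0 and 2 of a 4x2 tensor are non-zero.
t = RowTensorSketch(indices=[0, 2],
                    values=[[1.0, 2.0], [3.0, 4.0]],
                    dense_shape=(4, 2))
```

Storing only `indices` and `values` means memory scales with the number of touched rows rather than with the full `dense_shape`.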
* Executor and performance optimization
* MindSpore graph compilation performance improved by 20%.
* Decoupled C++ and Python modules to enable separate compilation of core modules.
* Fix dtype casting bug when using mixed precision in PyNative mode([!3730](https://gitee.com/mindspore/mindspore/pulls/3730))
* Executor
* Fix etsnet training error when UnsegmentSum's first input shape is (1,) ([!4573](https://gitee.com/mindspore/mindspore/pulls/4573))
* Fix incorrect results in while control flow caused by unsupported value references ([!4103](https://gitee.com/mindspore/mindspore/pulls/4103))
* Fix bug where the output tensor did not carry the device data type ([!3774](https://gitee.com/mindspore/mindspore/pulls/3774))
* Fix bug where multiple attribute values were eliminated in PyNative mode ([!4225](https://gitee.com/mindspore/mindspore/pulls/4225))
* Fix bug of AssignAdd failing to work correctly in multiple cases ([!5171](https://gitee.com/mindspore/mindspore/pulls/5171))
* GPU platform
* Improve environment variable checking for the nvcc compiler path ([!5140](https://gitee.com/mindspore/mindspore/pulls/5140))
* Fix error in Cast operator conversion from fp16 to fp32 ([!4147](https://gitee.com/mindspore/mindspore/pulls/4147))
* Fix array out-of-bounds error in the make_tuple operator ([!5219](https://gitee.com/mindspore/mindspore/pulls/5219))
* Data processing
* Fix GeneratorDataset timeout([!3624](https://gitee.com/mindspore/mindspore/pulls/3624))
Contributions of any kind are welcome!
* Executor
* Fix Dropout, TopK and AddN errors in PyNative mode ([!1285](https://gitee.com/mindspore/mindspore/pulls/1285), [!1138](https://gitee.com/mindspore/mindspore/pulls/1138), [!1033](https://gitee.com/mindspore/mindspore/pulls/1033)).
* Fix memory leaks after execution in PyNative mode ([!1201](https://gitee.com/mindspore/mindspore/pulls/1201)).
* Fix HCCL failure in some special scenarios ([!1204](https://gitee.com/mindspore/mindspore/pulls/1204), [!1252](https://gitee.com/mindspore/mindspore/pulls/1252)).
* Fix TopK operator selection strategy bug between AI Core and AI CPU ([!1367](https://gitee.com/mindspore/mindspore/pulls/1367)).
* Fix unequal input memory size of the 'assign' op in control sink mode when assigning data from one child graph to another ([!802](https://gitee.com/mindspore/mindspore/pulls/802)).
* Fix AllReduce IR inconsistency ([!989](https://gitee.com/mindspore/mindspore/pulls/989)).
* GPU platform
* Fix summary for gradient collection ([!1364](https://gitee.com/mindspore/mindspore/pulls/1364))
* Fix the slice operator ([!1489](https://gitee.com/mindspore/mindspore/pulls/1489))