Differential Privacy is coming! With the new differential-privacy optimizers you can train a model as usual, while the trained model preserves the privacy of the training dataset and satisfies the definition of differential privacy under a given privacy budget.
* Optimizers with Differential Privacy ([PR23](https://gitee.com/mindspore/mindarmour/pulls/23), [PR24](https://gitee.com/mindspore/mindarmour/pulls/24))
    * Common optimizers (SGD, Adam) now have differential-privacy versions; more are on the way.
    * Gaussian noise is added automatically and adaptively during training to achieve differential privacy (see the sketch after this list).
    * Training stops automatically once the differential privacy budget is exceeded.
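
The sketch below illustrates the idea behind these two features, not the MindArmour API: per-example gradients are clipped, Gaussian noise is added to the averaged gradient, and training halts once an accountant reports that the epsilon budget is spent. All names, parameter values, and the deliberately loose basic-composition accounting rule are illustrative assumptions; the actual privacy monitors use tighter accounting.

```python
import numpy as np

# Illustrative constants (not MindArmour defaults).
CLIP_NORM = 1.0         # per-example gradient clipping bound C
NOISE_MULTIPLIER = 8.0  # noise std = NOISE_MULTIPLIER * C (large, for illustration)
LR = 0.1
EPS_BUDGET = 10.0       # total epsilon the training run may spend
DELTA_PER_STEP = 1e-6   # delta spent by each step


def dp_sgd_step(per_example_grads, params):
    """One DP-SGD update: clip each example's gradient, average, add Gaussian noise."""
    clipped = [g * min(1.0, CLIP_NORM / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Noise std is scaled by the sensitivity of the mean gradient (C / batch size).
    noise = np.random.normal(0.0, NOISE_MULTIPLIER * CLIP_NORM / len(clipped),
                             size=mean_grad.shape)
    return params - LR * (mean_grad + noise)


def spent_epsilon(steps):
    """Loose basic-composition accountant: per-step Gaussian-mechanism epsilon, summed.

    Real monitors use much tighter accounting (e.g. Renyi DP); this only
    illustrates "stop training when the budget is exceeded".
    """
    eps_per_step = np.sqrt(2.0 * np.log(1.25 / DELTA_PER_STEP)) / NOISE_MULTIPLIER
    return steps * eps_per_step


params = np.zeros(10)
for step in range(1, 10001):
    grads = [np.random.randn(10) for _ in range(32)]  # stand-in for per-example gradients
    params = dp_sgd_step(grads, params)
    if spent_epsilon(step) > EPS_BUDGET:
        print(f"Privacy budget exceeded at step {step}; stopping training.")
        break
```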