Commit 3561f254 authored by ZhidanLiu

update tutorial of differential privacy

Parent e1ef0b66
...
@@ -86,25 +86,25 @@ TAG = 'Lenet5_train'
 ```python
 cfg = edict({
-    'device_target': 'Ascend',  # device used
-    'data_path': './MNIST_unzip',  # the path of training and testing data set
-    'dataset_sink_mode': False,  # whether to deliver all training data to the device at one time
-    'num_classes': 10,  # the number of classes of model's output
-    'lr': 0.01,  # the learning rate of model's optimizer
-    'momentum': 0.9,  # the momentum value of model's optimizer
-    'epoch_size': 10,  # training epochs
-    'batch_size': 256,  # batch size for training
-    'image_height': 32,  # the height of training samples
-    'image_width': 32,  # the width of training samples
-    'save_checkpoint_steps': 234,  # the interval steps for saving checkpoint file of the model
-    'keep_checkpoint_max': 10,  # the maximum number of checkpoint files to be saved
-    'micro_batches': 32,  # the number of small batches split from an original batch
-    'l2_norm_bound': 1.0,  # the clip bound of the gradients of model's training parameters
+    'num_classes': 10,  # the number of classes of model's output
+    'lr': 0.1,  # the learning rate of model's optimizer
+    'momentum': 0.9,  # the momentum value of model's optimizer
+    'epoch_size': 10,  # training epochs
+    'batch_size': 256,  # batch size for training
+    'image_height': 32,  # the height of training samples
+    'image_width': 32,  # the width of training samples
+    'save_checkpoint_steps': 234,  # the interval steps for saving checkpoint file of the model
+    'keep_checkpoint_max': 10,  # the maximum number of checkpoint files to be saved
+    'device_target': 'Ascend',  # device used
+    'data_path': './MNIST_unzip',  # the path of training and testing data set
+    'dataset_sink_mode': False,  # whether to deliver all training data to the device at one time
+    'micro_batches': 16,  # the number of small batches split from an original batch
+    'norm_clip': 1.0,  # the clip bound of the gradients of model's training parameters
     'initial_noise_multiplier': 1.5,  # the initial multiplication coefficient of the noise added to training
                                       # parameters' gradients
     'mechanisms': 'AdaGaussian',  # the method of adding noise in gradients while training
     'optimizer': 'Momentum'  # the base optimizer used for differential privacy training
 })
 ```
 2. Configure necessary information, including the environment information and execution mode.
...
@@ -320,13 +320,13 @@ ds_train = generate_mnist_dataset(os.path.join(cfg.data_path, "train"),
 5. Display the result.
-   The accuracy of the LeNet model without differential privacy is 99%, and the accuracy of the LeNet model with adaptive differential privacy AdaDP is 91%.
+   The accuracy of the LeNet model without differential privacy is 99%, and the accuracy of the LeNet model with adaptive differential privacy AdaDP is 98%.
 ```
 ============== Starting Training ==============
 ...
 ============== Starting Testing ==============
 ...
-============== Accuracy: 0.9115 ==============
+============== Accuracy: 0.9879 ==============
 ```
 ### References
...
...
@@ -72,25 +72,25 @@ TAG = 'Lenet5_train'
 ```python
 cfg = edict({
-    'device_target': 'Ascend',  # device used
-    'data_path': './MNIST_unzip',  # the path of training and testing data set
-    'dataset_sink_mode': False,  # whether to deliver all training data to the device at one time
-    'num_classes': 10,  # the number of classes of model's output
-    'lr': 0.01,  # the learning rate of model's optimizer
-    'momentum': 0.9,  # the momentum value of model's optimizer
-    'epoch_size': 10,  # training epochs
-    'batch_size': 256,  # batch size for training
-    'image_height': 32,  # the height of training samples
-    'image_width': 32,  # the width of training samples
-    'save_checkpoint_steps': 234,  # the interval steps for saving checkpoint file of the model
-    'keep_checkpoint_max': 10,  # the maximum number of checkpoint files to be saved
-    'micro_batches': 32,  # the number of small batches split from an original batch
-    'l2_norm_bound': 1.0,  # the clip bound of the gradients of model's training parameters
+    'num_classes': 10,  # the number of classes of model's output
+    'lr': 0.1,  # the learning rate of model's optimizer
+    'momentum': 0.9,  # the momentum value of model's optimizer
+    'epoch_size': 10,  # training epochs
+    'batch_size': 256,  # batch size for training
+    'image_height': 32,  # the height of training samples
+    'image_width': 32,  # the width of training samples
+    'save_checkpoint_steps': 234,  # the interval steps for saving checkpoint file of the model
+    'keep_checkpoint_max': 10,  # the maximum number of checkpoint files to be saved
+    'device_target': 'Ascend',  # device used
+    'data_path': './MNIST_unzip',  # the path of training and testing data set
+    'dataset_sink_mode': False,  # whether to deliver all training data to the device at one time
+    'micro_batches': 16,  # the number of small batches split from an original batch
+    'norm_clip': 1.0,  # the clip bound of the gradients of model's training parameters
     'initial_noise_multiplier': 1.5,  # the initial multiplication coefficient of the noise added to training
                                       # parameters' gradients
     'mechanisms': 'AdaGaussian',  # the method of adding noise in gradients while training
     'optimizer': 'Momentum'  # the base optimizer used for differential privacy training
 })
 ```
 2. Configure necessary information, including the environment information and execution mode.
...
@@ -306,13 +306,13 @@ ds_train = generate_mnist_dataset(os.path.join(cfg.data_path, "train"),
 5. Display the result.
-   The accuracy of the LeNet model without differential privacy is stable at 99%; with adaptive differential privacy AdaDP, the model converges and its accuracy is stable at 91%.
+   The accuracy of the LeNet model without differential privacy is stable at 99%; with adaptive differential privacy AdaDP, the model converges and its accuracy is stable at 98%.
 ```
 ============== Starting Training ==============
 ...
 ============== Starting Testing ==============
 ...
-============== Accuracy: 0.9115 ==============
+============== Accuracy: 0.9879 ==============
 ```
 ### References
...