Unverified · Commit a53976a5 · Authored by J jerrywgz · Committed by GitHub

refine README (#1747)

* refine README in LRC
Parent e5327e01
# LRC Local Rademacher Complexity Regularization
Regularization of Deep Neural Networks (DNNs) for the sake of improving their generalization capability is important and challenging. This directory contains an image classification model based on a novel regularizer rooted in Local Rademacher Complexity (LRC). We appreciate the contribution by [DARTS](https://arxiv.org/abs/1806.09055) to our research. The LRC regularization and DARTS are combined in this model on the CIFAR-10 dataset. Code accompanying the paper
> [An Empirical Study on Regularization of Deep Neural Networks by Local Rademacher Complexity](https://arxiv.org/abs/1902.00873)\
> Yingzhen Yang, Xingjian Li, Jun Huan.\
> _arXiv:1902.00873_.
---
# Table of Contents
- [Installation](#installation)
- [Data preparation](#data-preparation)
- [Training](#training)
## Installation
* Initialize bias in batch norm and fc layers to a constant zero, and do not add bias to conv2d.
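The initialization scheme above can be sketched framework-agnostically. This is a minimal NumPy illustration, not the repository's actual PaddlePaddle code; the weight scale of 0.01 and the helper names are assumptions for the sketch:

```python
import numpy as np

def init_batch_norm(channels):
    # Batch norm: scale starts at 1, bias at a constant zero (per the note above).
    gamma = np.ones(channels)
    beta = np.zeros(channels)
    return gamma, beta

def init_fc(in_dim, out_dim, rng):
    # Fully connected: random weights (the 0.01 scale is illustrative), zero bias.
    w = rng.normal(0.0, 0.01, size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return w, b

def init_conv(out_ch, in_ch, k, rng):
    # Convolution: weights only -- no bias parameter is created at all.
    return rng.normal(0.0, 0.01, size=(out_ch, in_ch, k, k))

rng = np.random.default_rng(0)
gamma, beta = init_batch_norm(64)
fc_w, fc_b = init_fc(256, 10, rng)
conv_w = init_conv(64, 3, 3, rng)
```

Omitting the conv2d bias is a common choice when each convolution is followed by batch norm, since the norm's own shift parameter makes a separate conv bias redundant.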
## Reference

- DARTS: Differentiable Architecture Search [`paper`](https://arxiv.org/abs/1806.09055)
- Differentiable architecture search in PyTorch [`code`](https://github.com/quark0/darts)
# LRC Local Rademacher Complexity Regularization
The choice of regularization is important and challenging for improving the generalization capability of deep neural networks. This directory contains an image classification model based on a novel regularizer rooted in Local Rademacher Complexity (LRC). We are grateful to the [DARTS](https://arxiv.org/abs/1806.09055) model for its help to this research. The model combines the LRC regularizer with the DARTS network and achieves excellent results on the CIFAR-10 dataset. Code is released together with the paper
> [An Empirical Study on Regularization of Deep Neural Networks by Local Rademacher Complexity](https://arxiv.org/abs/1902.00873)\
> Yingzhen Yang, Xingjian Li, Jun Huan.\
> _arXiv:1902.00873_.
---
# Table of Contents
- [Installation](#installation)
- [Data preparation](#data-preparation)
- [Training](#training)
## Installation
* Initialize bias in batch norm and fully connected layers to a constant zero, and do not add bias to the convolution layers.
## Reference

- DARTS: Differentiable Architecture Search [`paper`](https://arxiv.org/abs/1806.09055)
- Differentiable Architecture Search in PyTorch [`code`](https://github.com/quark0/darts)