diff --git a/example/mobilenetv2/Readme.md b/example/mobilenetv2/Readme.md
index d3dedb3cb846d9eabb3a8e5ca468f62e89f9acf7..4a1b8c26a1e50b82e9726fae27889308bc834aee 100644
--- a/example/mobilenetv2/Readme.md
+++ b/example/mobilenetv2/Readme.md
@@ -4,7 +4,7 @@ MobileNetV2 is a significant improvement over MobileNetV1 and pushes the state o
 
 MobileNetV2 builds upon the ideas from MobileNetV1, using depthwise separable convolution as efficient building blocks. However, V2 introduces two new features to the architecture: 1) linear bottlenecks between the layers, and 2) shortcut connections between the bottlenecks1.
 
-[Paper](https://arxiv.org/pdf/1801.04381) Howard, Andrew, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang et al. "Searching for MobileNetV2." In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324. 2019.
+[Paper](https://arxiv.org/pdf/1801.04381) Sandler, Mark, et al. "MobileNetV2: Inverted Residuals and Linear Bottlenecks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
 
 # Dataset
 
diff --git a/example/mobilenetv2_quant/Readme.md b/example/mobilenetv2_quant/Readme.md
index 01cc7e360f6ed12f6adadada24f1b4e9f3b88689..361b74dd121fe9547c6fce74496616213b66cf1c 100644
--- a/example/mobilenetv2_quant/Readme.md
+++ b/example/mobilenetv2_quant/Readme.md
@@ -4,7 +4,7 @@ MobileNetV2 is a significant improvement over MobileNetV1 and pushes the state o
 
 MobileNetV2 builds upon the ideas from MobileNetV1, using depthwise separable convolution as efficient building blocks. However, V2 introduces two new features to the architecture: 1) linear bottlenecks between the layers, and 2) shortcut connections between the bottlenecks1.
 
-[Paper](https://arxiv.org/pdf/1801.04381) Howard, Andrew, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang et al. "Searching for MobileNetV2." In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324. 2019.
+[Paper](https://arxiv.org/pdf/1801.04381) Sandler, Mark, et al. "MobileNetV2: Inverted Residuals and Linear Bottlenecks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
 
 # Dataset
 
@@ -16,7 +16,6 @@ Dataset used: imagenet
 - Data format: RGB images.
 - Note: Data will be processed in src/dataset.py
 
-
 # Environment Requirements
 
 - Hardware(Ascend)
@@ -48,6 +47,8 @@ Dataset used: imagenet
 ├── eval.py
 ```
 
+Note: The current hyperparameters have only been tested with 4-card training. To train on 8 cards, adjust parameters such as the learning rate in 'src/config.py'.
+
 ## Training process
 
 ### Usage
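The note added above about retuning when moving from 4 cards to 8 usually amounts to scaling the learning rate with the number of devices (the linear-scaling rule, since the global batch size doubles). The sketch below is a minimal illustration of that adjustment in plain Python; the field names and values (`device_num`, `lr`, 0.4) are assumptions made for the example and do not necessarily match the actual contents of `src/config.py`.

```python
# Minimal sketch: adapting a 4-card hyperparameter set to 8 cards by linearly
# scaling the learning rate with the number of devices. Field names and values
# are illustrative assumptions, not the real src/config.py.
config_4p = {
    "device_num": 4,   # number of cards the current hyperparameters were tested on
    "lr": 0.4,         # base learning rate assumed to be tuned for 4-card training
}

def scale_lr(base_lr: float, base_devices: int, new_devices: int) -> float:
    """Linear-scaling heuristic: keep the learning rate proportional to the global batch size."""
    return base_lr * new_devices / base_devices

config_8p = dict(config_4p)
config_8p["device_num"] = 8
config_8p["lr"] = scale_lr(config_4p["lr"], config_4p["device_num"], config_8p["device_num"])
print(config_8p["lr"])  # 0.8 under the linear-scaling rule
```

Linear scaling is only a starting point; other batch-size-dependent settings (for example warmup epochs or weight decay) may also need retuning when the card count changes.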