Commit 32a470fd, authored by chengxianbin

modify ssd & yolov3-resnet18 net README.md

Parent: 5fc18b98
# Contents
- [SSD Description](#ssd-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Training](#training)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [SSD Description](#contents)
SSD discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes.
[Paper](https://arxiv.org/abs/1512.02325): Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. SSD: Single Shot MultiBox Detector. European Conference on Computer Vision (ECCV), 2016.
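To make the default-box mechanism concrete, here is a minimal NumPy sketch of decoding predicted offsets against a default box. The variance values (0.1, 0.2) are the common SSD defaults and are an assumption here, not values read from this repository's `config.py`.

```python
import numpy as np

def decode_boxes(pred, default_boxes, variances=(0.1, 0.2)):
    """Decode predicted (tx, ty, tw, th) offsets against default boxes.

    pred, default_boxes: arrays of shape (N, 4) in (cx, cy, w, h) form.
    Returns boxes of shape (N, 4) in (xmin, ymin, xmax, ymax) form.
    """
    # shift the default-box centre by the predicted offset, scale the size
    cxcy = pred[:, :2] * variances[0] * default_boxes[:, 2:] + default_boxes[:, :2]
    wh = default_boxes[:, 2:] * np.exp(pred[:, 2:] * variances[1])
    return np.concatenate([cxcy - wh / 2, cxcy + wh / 2], axis=1)

# zero offsets against one default box centred at (0.5, 0.5), size 0.2
print(decode_boxes(np.array([[0.0, 0.0, 0.0, 0.0]]),
                   np.array([[0.5, 0.5, 0.2, 0.2]])))  # -> [[0.4 0.4 0.6 0.6]]
```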
# [Model Architecture](#contents)
The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification, which is called the base network. Then add auxiliary structure to the network to produce detections.
This example implements an SSD network based on MobileNetV2, with support for training and evaluation.
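A minimal NumPy sketch of the non-maximum suppression step mentioned above; the IoU threshold of 0.6 is a common SSD setting and an assumption here, not a value read from `config.py`.

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.6):
    """boxes: (N, 4) array of (xmin, ymin, xmax, ymax); returns kept indices."""
    order = scores.argsort()[::-1]           # highest score first
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the current best box with the remaining ones
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # drop boxes that overlap the kept box too much
        order = order[1:][iou <= iou_threshold]
    return keep
```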
# [Dataset](#contents)
Dataset used: [COCO2017](<http://images.cocodataset.org/>)
- Dataset size: 19G
  - Train: 18G, 118000 images
  - Val: 1G, 5000 images
  - Annotations: 241M, instances, captions, person_keypoints, etc.
- Data format: image and json files
- Note: Data will be processed in dataset.py
# [Environment Requirements](#contents)
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Dataset

Download the dataset COCO2017. We use COCO2017 as the training dataset in this example by default, and you can also use your own datasets.

1. If the COCO dataset is used, **select dataset to coco when running the script.**

Install Cython and pycocotool, and you can also install mmcv to process data.
```
pip install Cython
pip install pycocotools
```
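As a quick sanity check that pycocotools can read the downloaded annotations, here is a minimal sketch; the annotation path assumes the standard COCO file name under the `coco_root` layout shown below, and is illustrative only.

```python
from pycocotools.coco import COCO

# load the validation annotations and print a few basic statistics
coco = COCO("/data/coco2017/annotations/instances_val2017.json")
img_ids = coco.getImgIds()
print("images:", len(img_ids), "categories:", len(coco.getCatIds()))
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0]))
print("boxes in first image:", [a["bbox"] for a in anns])  # [x, y, w, h] format
```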
Then change `coco_root` and other settings you need in `config.py`. The directory structure is as follows:

```
.
└─cocodataset
  ├─annotations
    ├─instance_train2017.json
    └─instance_val2017.json
  ├─val2017
  └─train2017
```
2. If your own dataset is used, **select dataset to other when running the script.**

Organize the dataset information into a TXT file, each row in the file is as follows:
```
train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2
```
Each row is an image annotation split by spaces. The first column is a relative path of an image, and the rest are boxes and class information in the format [xmin,ymin,xmax,ymax,class]. We read the image from an image path joined by `image_dir` (the dataset directory) and the relative path in `anno_path` (the TXT file path); `image_dir` and `anno_path` are set in `config.py`.
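A minimal sketch of how such a row can be parsed; the helper name and example paths are illustrative, not part of `dataset.py`.

```python
import os

def parse_annotation_line(line, image_dir):
    """Split one annotation row into an absolute image path and box list."""
    fields = line.strip().split(" ")
    image_path = os.path.join(image_dir, fields[0])
    # each remaining field is "xmin,ymin,xmax,ymax,class"
    boxes = [list(map(int, f.split(","))) for f in fields[1:]]
    return image_path, boxes

path, boxes = parse_annotation_line(
    "train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2", "/data/dataset")
print(path)   # /data/dataset/train2017/0000001.jpg
print(boxes)  # [[0, 259, 401, 459, 7], [35, 28, 324, 201, 2]]
```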
# [Quick Start](#contents)
After installing MindSpore via the official website, you can start training and evaluation on Ascend as follows:
```
# distributed training on Ascend
sh run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABLE_FILE]

# run eval on Ascend
sh run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```shell
.
└─ model_zoo
└─ ssd
├─ README.md ## descriptions about SSD
├─ scripts
├─ run_distribute_train.sh ## shell script for distributed on Ascend
└─ run_eval.sh ## shell script for eval on Ascend
├─ src
├─ __init__.py ## init file
├─ box_util.py ## bbox utils
├─ coco_eval.py ## coco metrics utils
├─ config.py ## total config
├─ dataset.py ## create dataset and process dataset
├─ init_params.py ## parameters utils
├─ lr_schedule.py ## learning rate generator
└─ ssd.py ## ssd architecture
├─ eval.py ## eval scripts
└─ train.py ## train scripts
```
## [Script Parameters](#contents)

Major parameters in train.py and config.py are as follows (run `python train.py -h` for more information):

```
"device_num": 1 # Use device nums
"lr": 0.05 # Learning rate init value
"dataset": coco # Dataset name
"epoch_size": 500 # Epoch size
"batch_size": 32 # Batch size of input tensor
"pre_trained": None # Pretrained checkpoint file path
"pre_trained_epoch_size": 0 # Pretrained epoch size
"save_checkpoint_epochs": 10 # The epoch interval between two checkpoints. By default, the checkpoint will be saved per 10 epochs
"loss_scale": 1024 # Loss scale
"class_num": 81 # Dataset class number
"image_shape": [300, 300] # Image height and width used as input to the model
"mindrecord_dir": "/data/MindRecord_COCO" # MindRecord path
"coco_root": "/data/coco2017" # COCO2017 dataset path
"voc_root": "" # VOC original dataset path
"image_dir": "" # Other dataset image path, if coco or voc used, it will be useless
"anno_path": "" # Other dataset annotation path, if coco or voc used, it will be useless
```
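For illustration, here is a sketch of how these settings are typically grouped in `config.py`; the EasyDict style is an assumption based on common ModelZoo code, not a copy of this repository's file.

```python
from easydict import EasyDict as ed

# a subset of the settings listed above, grouped into one config object
config = ed({
    "class_num": 81,
    "image_shape": [300, 300],
    "mindrecord_dir": "/data/MindRecord_COCO",
    "coco_root": "/data/coco2017",
    "voc_root": "",
    "image_dir": "",
    "anno_path": "",
})
print(config.coco_root)  # attribute-style access: /data/coco2017
```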
## [Training Process](#contents)

### Training on Ascend

To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html) files from `coco_root` (COCO dataset) or `image_dir` and `anno_path` (own dataset). **Note that if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` instead of the raw images.**
- Distribute mode
```
sh run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABLE_FILE] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional)
```
We need five or seven parameters for this script.

- `DEVICE_NUM`: the device number for distributed training.
- `EPOCH_SIZE`: the epoch size for distributed training.
- `LR`: the initial learning rate for distributed training.
- `DATASET`: the dataset mode for distributed training.
- `RANK_TABLE_FILE`: the path of [rank_table.json](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools); it is better to use an absolute path.
- `PRE_TRAINED`: the path of the pretrained checkpoint file; it is better to use an absolute path.
- `PRE_TRAINED_EPOCH_SIZE`: the epoch number of the pretrained checkpoint.

For example: `sh run_distribute_train.sh 8 500 0.2 coco /data/hccl.json`
Training results will be stored in the current path, in a folder whose name begins with "LOG". There you can find checkpoint files together with results like the following in the log.
```
epoch: 1 step: 458, loss is 3.1681802
...
epoch: 500 step: 458, loss is 0.5548882
epoch time: 39064.8467540741, per step time: 85.29442522723602
```
## [Evaluation Process](#contents)

### Evaluation on Ascend

For evaluation, run `eval.py` with `checkpoint_path`. `checkpoint_path` is the path of the [checkpoint](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html) file.
```
sh run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```
We need three parameters for this script.
- `DATASET`: the dataset mode of the evaluation dataset.
- `CHECKPOINT_PATH`: the absolute path of the checkpoint file.
- `DEVICE_ID`: the device id for evaluation.
> Note: the checkpoint can be produced in the training process.

Inference results will be stored in the example path, in a folder whose name begins with "eval". There you can find results like the following in the log.
```
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.238
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.400
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.240
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.039
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.198
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.438
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.250
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.389
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.424
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.122
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.434
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.697
========================================
mAP: 0.23808886505483504
```
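For reference, here is a sketch of how such a COCO-style mAP table is produced with pycocotools; the ground-truth and result file names are illustrative, not the exact ones used by `coco_eval.py`.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("/data/coco2017/annotations/instances_val2017.json")
# predictions.json: a list of {image_id, category_id, bbox, score} records
coco_dt = coco_gt.loadRes("predictions.json")
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()              # prints the AP/AR table shown above
print("mAP:", evaluator.stats[0])  # AP @ IoU=0.50:0.95
```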
# [Model Description](#contents)
## [Performance](#contents)
### Evaluation Performance
| Parameters | Ascend |
| -------------------------- | -------------------------------------------------------------|
| Model Version | V1 |
| Resource | Ascend 910; CPU 2.60GHz, 56 cores; Memory 314G |
| Uploaded Date | 06/01/2020 (month/day/year) |
| MindSpore Version | 0.3.0-alpha |
| Dataset | COCO2017 |
| Training Parameters | epoch = 500, batch_size = 32 |
| Optimizer | Momentum |
| Loss Function | Sigmoid Cross Entropy, SmoothL1Loss |
| Speed | 8pcs: 90ms/step |
| Total time | 8pcs: 4.81hours |
| Parameters (M) | 34 |
| Scripts | [ssd script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/ssd) |
### Inference Performance
| Parameters | Ascend |
| ------------------- | ----------------------------|
| Model Version | V1 |
| Resource | Ascend 910 |
| Uploaded Date | 06/01/2020 (month/day/year) |
| MindSpore Version | 0.3.0-alpha |
| Dataset | COCO2017 |
| batch_size | 1 |
| outputs | mAP |
| Accuracy | mAP 23.8% (IoU=0.50:0.95) |
| Model for inference | 34M(.ckpt file) |
# [Description of Random Situation](#contents)
In dataset.py, we set the seed inside the `create_dataset` function. We also use a random seed in train.py.
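A minimal sketch of such seeding, assuming the standard `random`, NumPy and `mindspore.dataset` seed APIs; the seed value is illustrative.

```python
import random

import numpy as np
import mindspore.dataset as de

random.seed(1)
np.random.seed(1)
de.config.set_seed(1)  # fixes the shuffle order used when creating the dataset
```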
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
#!/bin/bash
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
if [ $# != 3 ]
then
    echo "Usage: sh run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]"
    exit 1
fi

get_real_path(){
    if [ "${1:0:1}" == "/" ]; then
        echo "$1"
    else
        echo "$(realpath -m $PWD/$1)"
    fi
}
DATASET=$1
CHECKPOINT_PATH=$(get_real_path $2)
echo $DATASET
echo $CHECKPOINT_PATH
if [ ! -f $CHECKPOINT_PATH ]
then
    echo "error: CHECKPOINT_PATH=$CHECKPOINT_PATH is not a file"
    exit 1
fi
export DEVICE_NUM=1
export DEVICE_ID=$3
export RANK_SIZE=$DEVICE_NUM
export RANK_ID=0
BASE_PATH=$(cd "`dirname $0`" || exit; pwd)
cd $BASE_PATH/../ || exit
if [ -d "eval$3" ];
then
rm -rf ./eval$3
fi
mkdir ./eval$3
cp ./*.py ./eval$3
cp -r ./src ./eval$3
cd ./eval$3 || exit
env > env.log
echo "start infering for device $DEVICE_ID"
python eval.py \
--dataset=$DATASET \
--checkpoint_path=$CHECKPOINT_PATH \
--device_id=$3 > log.txt 2>&1 &
cd ..
def main():
    ...
    parser.add_argument("--batch_size", type=int, default=32, help="Batch size, default is 32.")
    parser.add_argument("--pre_trained", type=str, default=None, help="Pretrained Checkpoint file path.")
    parser.add_argument("--pre_trained_epoch_size", type=int, default=0, help="Pretrained epoch size.")
    parser.add_argument("--save_checkpoint_epochs", type=int, default=10, help="Save checkpoint epochs, default is 10.")
    parser.add_argument("--loss_scale", type=int, default=1024, help="Loss scale, default is 1024.")
    parser.add_argument("--filter_weight", type=bool, default=False, help="Filter weight parameters, default is False.")
    args_opt = parser.parse_args()
    ...
# Contents
- [YOLOv3_ResNet18 Description](#yolov3_resnet18-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Training](#training)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [YOLOv3_ResNet18 Description](#contents)
YOLOv3 network based on ResNet-18, with support for training and evaluation.
[Paper](https://arxiv.org/abs/1804.02767): Joseph Redmon, Ali Farhadi. YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767, 2018.
# [Model Architecture](#contents)
The overall network architecture of YOLOv3_ResNet18 is as follows. We use ResNet18 as the backbone of YOLOv3_ResNet18. The ResNet18 architecture has 4 stages. It performs the initial convolution and max-pooling with 7×7 and 3×3 kernel sizes respectively. Afterward, every stage of the network has residual blocks (2, 2, 2, 2), each containing two 3×3 conv layers. Finally, the network has an average pooling layer followed by a fully connected layer.
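A condensed sketch of one such residual block in MindSpore `nn.Cell` style, illustrating the two 3×3 conv layers per block; this is a simplified identity-mapping case, not the exact implementation in `src/yolov3.py`.

```python
import mindspore.nn as nn

class BasicBlock(nn.Cell):
    """Two 3x3 convolutions with a skip connection (identity-mapping case)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, pad_mode="same")
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, pad_mode="same")
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def construct(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual addition
```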
# [Dataset](#contents)
Dataset used: [COCO2017](<http://images.cocodataset.org/>)
We use COCO2017 as the training dataset.

- Dataset size: 19G
  - Train: 18G, 118000 images
  - Val: 1G, 5000 images
  - Annotations: 241M, instances, captions, person_keypoints, etc.
- Data format: image and json files
- Note: Data will be processed in dataset.py
1. The directory structure is as follows:

   ...
Each row is an image annotation split by spaces. The first column is a relative path of an image, and the rest are boxes and class information in the format [xmin,ymin,xmax,ymax,class]. `dataset.py` is the parsing script; we read the image from an image path joined by `image_dir` (the dataset directory) and the relative path in `anno_path` (the TXT file path). `image_dir` and `anno_path` are external inputs.
# [Environment Requirements](#contents)
- Hardware(Ascend)
- Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
# [Quick Start](#contents)
After installing MindSpore via the official website, you can start training and evaluation on Ascend as follows:
- running on Ascend

```shell
# run standalone training example
sh run_standalone_train.sh [DEVICE_ID] [EPOCH_SIZE] [MINDRECORD_DIR] [IMAGE_DIR] [ANNO_PATH]

# run distributed training example
sh run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [MINDRECORD_DIR] [IMAGE_DIR] [ANNO_PATH] [RANK_TABLE_FILE]

# run evaluation example
sh run_eval.sh [DEVICE_ID] [CKPT_PATH] [MINDRECORD_DIR] [IMAGE_DIR] [ANNO_PATH]
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```
└── model_zoo
├── README.md // descriptions about all the models
└── yolov3_resnet18
├── README.md // descriptions about yolov3_resnet18
├── scripts
├── run_distribute_train.sh // shell script for distributed on Ascend
├── run_standalone_train.sh // shell script for standalone on Ascend
└── run_eval.sh // shell script for evaluation on Ascend
├── src
├── dataset.py // creating dataset
├── yolov3.py // yolov3 architecture
├── config.py // parameter configuration
└── utils.py // util function
├── train.py // training script
└── eval.py // evaluation script
```
## [Script Parameters](#contents)
```
Major parameters in train.py and config.py are as follows:

device_num: Use device nums, default is 1.
lr: Learning rate, default is 0.001.
epoch_size: Epoch size, default is 10.
batch_size: Batch size, default is 32.
pre_trained: Pretrained Checkpoint file path.
pre_trained_epoch_size: Pretrained epoch size.
mindrecord_dir: Mindrecord directory.
image_dir: Dataset path.
anno_path: Annotation path.
img_shape: Image height and width used as input to the model.
```
## [Training Process](#contents)

### Training on Ascend

To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html) files from `image_dir` and `anno_path` (the absolute image path is joined from `image_dir` and the relative path in `anno_path`). **Note that if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` rather than `image_dir` and `anno_path`.**
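For illustration, a sketch of such a MindRecord conversion, assuming the standard `mindspore.mindrecord.FileWriter` API; the schema, file names and annotation values are illustrative, not those generated by `dataset.py`.

```python
import numpy as np
from mindspore.mindrecord import FileWriter

writer = FileWriter(file_name="Mindrecord_train/yolo.mindrecord", shard_num=1)
schema = {
    "image": {"type": "bytes"},
    "annotation": {"type": "int32", "shape": [-1, 5]},  # [xmin, ymin, xmax, ymax, class]
}
writer.add_schema(schema, "yolo_schema")

# read one image and write it with its boxes as a single record
with open("/data/dataset/train2017/0000001.jpg", "rb") as f:
    image_bytes = f.read()
annotation = np.array([[0, 259, 401, 459, 7]], dtype=np.int32)
writer.write_raw_data([{"image": image_bytes, "annotation": annotation}])
writer.commit()
```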
- Stand alone mode
```
sh run_standalone_train.sh 0 50 ./Mindrecord_train ./dataset ./dataset/train.txt
```
The input variables are device id, epoch size, mindrecord directory path, dataset directory path and train TXT file path.
...
You will get the loss value and time of each step as follows:
```
epoch: 145 step: 156, loss is 12.202981
epoch time: 25599.22742843628, per step time: 164.0976117207454
epoch: 146 step: 156, loss is 16.91706
epoch time: 23199.971675872803, per step time: 148.7177671530308
epoch: 147 step: 156, loss is 13.04007
epoch time: 23801.95164680481, per step time: 152.57661312054364
epoch: 148 step: 156, loss is 10.431475
epoch time: 23634.241580963135, per step time: 151.50154859591754
epoch: 149 step: 156, loss is 14.665991
epoch time: 24118.8325881958, per step time: 154.60790120638333
epoch: 150 step: 156, loss is 10.779521
epoch time: 25319.57221031189, per step time: 162.30495006610187
```
Note that these results come from a two-class (person and face) task using our own annotations with COCO2017; you can change `num_classes` in `config.py` to train on your own dataset. We will support the 80 classes of COCO2017 in the near future.
## [Evaluation Process](#contents)

### Evaluation on Ascend

To evaluate, run `eval.py` with the dataset `image_dir`, `anno_path` (eval TXT), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of the [checkpoint](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html) file.
```
sh run_eval.sh 0 yolo.ckpt ./Mindrecord_eval ./dataset ./dataset/eval.txt
```
The input variables are device id, checkpoint path, mindrecord directory path, dataset directory path and eval TXT file path.
You will get the precision and recall value of each class:
```
class 0 precision is 88.18%, recall is 66.00%
class 1 precision is 85.34%, recall is 79.13%
```
Note that the precision and recall values are results of the two-class (person and face) task using our own annotations with COCO2017.
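For reference, a sketch of how per-class precision and recall are computed from matched detections; the counts are illustrative, not the actual evaluation numbers.

```python
def precision_recall(tp, fp, fn):
    """tp: true positives, fp: false positives, fn: false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# illustrative counts for one class
p, r = precision_recall(tp=660, fp=88, fn=340)
print(f"precision is {p:.2%}, recall is {r:.2%}")
```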
# [Model Description](#contents)
## [Performance](#contents)
### Evaluation Performance
| Parameters | Ascend |
| -------------------------- | ----------------------------------------------------------- |
| Model Version | YOLOv3_ResNet18 |
| Resource | Ascend 910; CPU 2.60GHz, 56 cores; Memory 314G |
| Uploaded Date | 06/01/2020 (month/day/year) |
| MindSpore Version | 0.2.0-alpha |
| Dataset | COCO2017 |
| Training Parameters | epoch = 150, batch_size = 32, lr = 0.001 |
| Optimizer | Adam |
| Loss Function | Sigmoid Cross Entropy |
| outputs | probability |
| Speed | 1pc: 120 ms/step; 8pcs: 160 ms/step |
| Total time | 1pc: 150 mins; 8pcs: 70 mins |
| Parameters (M) | 189 |
| Scripts | [yolov3_resnet18 script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/yolov3_resnet18) |
### Inference Performance
| Parameters | Ascend |
| ------------------- | ----------------------------------------------- |
| Model Version | YOLOv3_ResNet18 |
| Resource | Ascend 910 |
| Uploaded Date | 06/01/2020 (month/day/year) |
| MindSpore Version | 0.2.0-alpha |
| Dataset | COCO2017 |
| batch_size | 1 |
| outputs | precision and recall |
| Accuracy | class 0: 88.18%/66.00%; class 1: 85.34%/79.13% |
# [Description of Random Situation](#contents)
In dataset.py, we set the seed inside the `create_dataset` function. We also use a random seed in train.py.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).