FairMOT enhance was trained on 8 GPUs, with the CrowdHuman dataset added to the training set. FairMOT enhance DLA-34 used a batch size of 16 per GPU and was trained for 60 epochs; FairMOT enhance HarDNet-85 used a batch size of 10 per GPU and was trained for 30 epochs.
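For reference, here is a minimal sketch of what this setup might map to on the command line, assuming the repo keeps FairMOT's `train.py` entry point and its `--arch`, `--gpus`, `--batch_size`, `--num_epochs`, and `--data_cfg` options. The option values, the experiment name, and the data-config path are assumptions, not the repo's verbatim script; check the `experiments/` scripts for the exact command.

```sh
# Hypothetical sketch assuming FairMOT-style options; not the repo's exact script.
# FairMOT enhance DLA-34: 8 GPUs, 16 images per GPU (128 total, assuming
# --batch_size is the global batch size as in FairMOT), 60 epochs, with
# CrowdHuman mixed into the training set through the data config (assumed name).
python src/train.py mot \
  --exp_id fairmot_enhance_dla34 \
  --arch dla_34 \
  --gpus 0,1,2,3,4,5,6,7 \
  --batch_size 128 \
  --num_epochs 60 \
  --data_cfg src/lib/cfg/visdrone_crowdhuman.json
```

For the HarDNet-85 variant, the same sketch would swap in `--arch hardnet_85` (assumed name), `--batch_size 80` (10 per GPU), and `--num_epochs 30`.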
MCFairMOT is the multi-class extended version of [FairMOT](https://arxiv.org/abs/2004.01888).
**Notes:**
MOTA is the average MOTA over the 10 categories in the VisDrone2019 MOT dataset; this value also equals the average MOTA over all evaluated video sequences.
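As a concrete reading of that averaging, using the standard CLEAR-MOT definition of per-category MOTA (the metric definition is standard; only the notation below is added here):

```latex
% Standard CLEAR-MOT MOTA for one category c, then the unweighted
% 10-category average that the tables report.
\mathrm{MOTA}_c = 1 - \frac{\mathrm{FN}_c + \mathrm{FP}_c + \mathrm{IDSW}_c}{\mathrm{GT}_c},
\qquad
\mathrm{MOTA} = \frac{1}{10}\sum_{c=1}^{10}\mathrm{MOTA}_c
```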
MCFairMOT was trained on 4 GPUs for 30 epochs. The batch size is 6 per GPU for MCFairMOT DLA-34 and 4 per GPU for MCFairMOT HRNetV2-W18.
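Again as a hedged sketch only, assuming the same FairMOT-style options as above (the multi-class task name `mcmot` and the data-config path are also assumptions):

```sh
# Hypothetical sketch; task name, option values, and data config are assumptions.
# MCFairMOT DLA-34: 4 GPUs, 6 images per GPU (24 total, if --batch_size is
# global as in FairMOT), 30 epochs on VisDrone2019 MOT.
python src/train.py mcmot \
  --exp_id mcfairmot_dla34 \
  --arch dla_34 \
  --gpus 0,1,2,3 \
  --batch_size 24 \
  --num_epochs 30 \
  --data_cfg src/lib/cfg/visdrone.json
```

For MCFairMOT HRNetV2-W18 the sketch would use `--arch hrnet_18` (assumed name) and `--batch_size 16` (4 per GPU).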
## Getting Started
### 1. Training
Train MCFairMOT on 4 GPUs with the following command: