-`GPU_NUM`: Number of GPUs used to test the model. If not specified, it will be set to 1.
-`RESULT_FILE`: Filename of the output results. If not specified, the results will not be saved to a file.
-`EVAL_METRICS`: Items to be evaluated on the results. Allowed values depend on the dataset, e.g., `top_k_accuracy`, `mean_class_accuracy` are available for all datasets in recognition, `mean_average_precision` for Multi-Moments in Time, `AR@AN` for ActivityNet, etc.
-`NUM_PROC_PER_GPU`: Number of processes per GPU. If not specified, only one process will be assigned to each GPU.
You may use the result for simple comparisons, but double-check it before adopting it in technical reports or papers.
(1) FLOPs depend on the input shape, while the parameter count does not. The default input shape is (1, 3, 340, 256) for 2D recognizers and (1, 3, 32, 340, 256) for 3D recognizers.
(2) Some custom operators are not counted in FLOPs. You can add support for new operators by modifying [`mmaction/utils/flops_counter.py`](/mmaction/utils/flops_counter.py).
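Point (1) can be sanity-checked with a back-of-the-envelope calculation for a single convolution layer (a self-contained sketch, not the counter used by the repository):

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Rough FLOPs and parameter count of a single 2D convolution layer.

    The parameter count depends only on the layer shape, while FLOPs also
    scale with the output spatial size, i.e. with the input resolution.
    """
    params = c_out * c_in * k * k + c_out  # weights + biases
    flops = params * h_out * w_out         # roughly one op per weight per output position
    return flops, params

# Same layer at two input resolutions: parameters identical, FLOPs 4x larger.
flops_s, params_s = conv2d_cost(3, 64, 7, 128, 128)
flops_l, params_l = conv2d_cost(3, 64, 7, 256, 256)
assert params_s == params_l
assert flops_l == 4 * flops_s  # doubling height and width quadruples FLOPs
```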
### Publish a model
...
- online conversion
You can write a new Dataset class inherited from [BaseDataset](/mmaction/datasets/base.py), and overwrite two methods
`load_annotations(self)` and `evaluate(self, results, metrics, logger)`,
like [RawframeDataset](/mmaction/datasets/rawframe_dataset.py), [VideoDataset](/mmaction/datasets/video_dataset.py) or [ActivityNetDataset](/mmaction/datasets/activitynet_dataset.py).
- offline conversion
...
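To make the role of `load_annotations(self)` concrete, here is a framework-free sketch that parses a rawframe-style annotation file. The assumed line format `frame_dir total_frames label` is only an illustration; in a real Dataset subclass this would be a method reading `self.ann_file`:

```python
def load_annotations(ann_file):
    """Parse an annotation file into a list of per-video info dicts.

    Assumed line format: "<frame_dir> <total_frames> <label>".
    """
    video_infos = []
    with open(ann_file) as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            frame_dir, total_frames, label = line.split()
            video_infos.append(dict(frame_dir=frame_dir,
                                    total_frames=int(total_frames),
                                    label=int(label)))
    return video_infos
```

`evaluate(self, results, metrics, logger)` would then compare the model outputs in `results` against the `label` field of these info dicts and return a dict of metric values.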
### Customize optimizer
An example of a customized optimizer is [CopyOfSGD](/mmaction/core/optimizer/copy_of_sgd.py).
More generally, a customized optimizer could be defined as follows.
In `mmaction/core/optimizer/my_optimizer.py`:
...
Then you can use `MyOptimizer` in the `optimizer` field of config files.
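For instance, a registered `MyOptimizer` could then be selected in a config like this (the argument names `a` and `b` are hypothetical placeholders for whatever `MyOptimizer.__init__` accepts):

```python
# optimizer section of a config file; 'a' and 'b' are hypothetical arguments
optimizer = dict(type='MyOptimizer', a=0.01, b=0.9)
```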
In particular, if you want to construct an optimizer based on a specified model and param-wise config,
you can write a new optimizer constructor inheriting from [DefaultOptimizerConstructor](https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/optimizer/default_constructor.py)
and overwrite the `add_params(self, params, module)` method.
An example of a customized optimizer constructor is [TSMOptimizerConstructor](/mmaction/core/optimizer/tsm_optimizer_constructor.py).
More generally, a customized optimizer constructor could be defined as follows.
In `mmaction/core/optimizer/my_optimizer_constructor.py`:
...
3. Create a new file `mmaction/models/heads/tsn_head.py`.
You can write a new classification head inheriting from [BaseHead](/mmaction/models/heads/base.py),
and overwrite the `init_weights(self)` and `forward(self, x)` methods.
```python
...
```
## Tutorials
Currently, we provide some tutorials for users to [finetune a model](/docs/tutorials/finetune.md),
[add new dataset](/docs/tutorials/new_dataset.md), [add new modules](/docs/tutorials/new_modules.md).
This tutorial provides instructions for users to use the pre-trained models
to finetune them on other datasets, so that better performance can be achieved.
There are two steps to finetune a model on a new dataset.
1. Add support for the new dataset. See [Tutorial 2: Adding New Dataset](/docs/tutorials/new_dataset.md).
1. Modify the configs. This will be discussed in this tutorial.
For example, to finetune a model pre-trained on the Kinetics-400 dataset on another dataset, say UCF101,
...
```python
# use the pre-trained model for the whole TSN network
load_from = 'https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmaction/mmaction-v1/recognition/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'  # model path can be found in model zoo
```
...
- online conversion
You can write a new Dataset class inherited from [BaseDataset](/mmaction/datasets/base.py), and overwrite three methods
`load_annotations(self)`, `evaluate(self, results, metrics, logger)` and `dump_results(self, results, out)`,
like [RawframeDataset](/mmaction/datasets/rawframe_dataset.py), [VideoDataset](/mmaction/datasets/video_dataset.py) or [ActivityNetDataset](/mmaction/datasets/activitynet_dataset.py).
- offline conversion
...
## Customize Dataset by Mixing Dataset
MMAction also supports mixing datasets for training. Currently it supports repeating a dataset.
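Repeating a dataset is typically expressed by wrapping the original dataset config with `RepeatDataset`. The sketch below assumes a rawframe-style inner dataset, and the paths are placeholders:

```python
# Wrap the original dataset config so each epoch iterates over it 10 times.
data = dict(
    train=dict(
        type='RepeatDataset',
        times=10,
        dataset=dict(  # the original training dataset config
            type='RawframeDataset',
            ann_file='data/my_dataset/train_list.txt',  # placeholder path
            data_prefix='data/my_dataset/rawframes',    # placeholder path
            pipeline=[],  # training pipeline elided
        ),
    ),
)
```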
An example of a customized optimizer is [CopyOfSGD](/mmaction/core/optimizer/copy_of_sgd.py), defined in `mmaction/core/optimizer/copy_of_sgd.py`.
More generally, a customized optimizer could be defined as follows.
Assume you want to add an optimizer named as `MyOptimizer`, which has arguments `a`, `b` and `c`.
...
Some models may have some parameter-specific settings for optimization, e.g. weight decay for BatchNorm layers.
Users can apply such fine-grained parameter tuning by customizing the optimizer constructor.
You can write a new optimizer constructor inheriting from [DefaultOptimizerConstructor](https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/optimizer/default_constructor.py)
and overwrite the `add_params(self, params, module)` method.
An example of a customized optimizer constructor is [TSMOptimizerConstructor](/mmaction/core/optimizer/tsm_optimizer_constructor.py).
More generally, a customized optimizer constructor could be defined as follows.
In `mmaction/core/optimizer/my_optimizer_constructor.py`:
...
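The core idea behind `add_params(self, params, module)` can be sketched without any framework: walk over named parameters and append one parameter group per parameter, with per-group hyper-parameters. Purely for illustration, this sketch assumes BatchNorm parameters are identifiable by `'bn'` in their name and should receive zero weight decay; a real constructor inherits from `DefaultOptimizerConstructor` and works on an `nn.Module`:

```python
def add_params(params, named_parameters, base_lr=0.01, base_wd=1e-4):
    """Append one parameter group per parameter, with per-group settings.

    Parameters whose name contains 'bn' (assumed to be BatchNorm) get zero
    weight decay; everything else keeps the base weight decay.
    """
    for name, param in named_parameters:
        group = dict(params=[param], lr=base_lr)
        group['weight_decay'] = 0.0 if 'bn' in name else base_wd
        params.append(group)

param_groups = []
add_params(param_groups, [('conv1.weight', 'w0'), ('bn1.weight', 'w1'), ('fc.weight', 'w2')])
assert [g['weight_decay'] for g in param_groups] == [1e-4, 0.0, 1e-4]
```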
1. Create a new file `mmaction/models/heads/tsn_head.py`.
You can write a new classification head inheriting from [BaseHead](/mmaction/models/heads/base.py),
and overwrite the `init_weights(self)` and `forward(self, x)` methods.
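To give a feel for what such a head computes, here is a tiny framework-free sketch of the averaging consensus a TSN-style head applies to per-segment class scores; the real `forward` operates on torch tensors and also includes pooling, dropout and the fully connected layer:

```python
def segment_consensus(segment_scores):
    """Average per-segment class scores into a single per-video prediction.

    segment_scores: num_segments lists of num_classes scores each.
    """
    num_segments = len(segment_scores)
    num_classes = len(segment_scores[0])
    return [sum(seg[c] for seg in segment_scores) / num_segments
            for c in range(num_classes)]

# Three segments voting over two classes average to one score per class.
assert segment_consensus([[2, 0], [0, 2], [1, 1]]) == [1.0, 1.0]
```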