MMAction2 is an open-source toolbox for action understanding based on PyTorch.
It is a part of the [OpenMMLab project](https://github.com/open-mmlab) developed by [Multimedia Laboratory, CUHK](http://mmlab.ie.cuhk.edu.hk/).
- **Support for multiple action understanding frameworks**

  MMAction2 implements popular frameworks for action understanding:

  - For action recognition, various algorithms are implemented, including TSN, TSM, R(2+1)D, I3D, SlowOnly, SlowFast.
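For instance, a trained recognition model such as TSN can be run on a single video through the high-level API in `mmaction.apis`. Below is a minimal sketch; the paths are placeholders, and the exact signature of `inference_recognizer` has changed across MMAction2 versions (older versions also take a label-map file as a third argument), so check the docs of your installed version.

```python
from mmaction.apis import init_recognizer, inference_recognizer

# Placeholder paths: a TSN config shipped with the repo and a locally
# downloaded checkpoint.
config_file = 'configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py'
checkpoint_file = 'checkpoints/tsn_r50_kinetics400.pth'

# Build the recognizer from the config and load trained weights.
model = init_recognizer(config_file, checkpoint_file, device='cuda:0')

# Run action recognition on a single video file.
results = inference_recognizer(model, 'demo/demo.mp4')
```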
## Contributing
We appreciate all contributions to improve MMAction2. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
## Acknowledgement
MMAction2 is an open-source project contributed to by researchers and engineers from various colleges and companies.
We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new models.
## Comparison Rules
Here we compare our MMAction2 repo with other video understanding toolboxes under the same data and model settings,
measured by the training time per iteration. We use
- commit id [7f3490d](https://github.com/open-mmlab/mmaction/tree/7f3490d3db6a67fe7b87bfef238b757403b670e3) (1/5/2020) of MMAction V0.1
- commit id [8d53d6f](https://github.com/mit-han-lab/temporal-shift-module/tree/8d53d6fda40bea2f1b37a6095279c4b454d672bd) (5/5/2020) of Temporal-Shift-Module
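As a rough illustration of the metric itself (this is not MMAction2 code), seconds per iteration is typically measured as average wall-clock time over a fixed number of iterations, with warm-up iterations excluded:

```python
# Illustrative only: average wall-clock seconds per training iteration.
# Warm-up iterations are skipped so one-off costs (CUDA init, data-loading
# caches) don't skew the number.
import time

def seconds_per_iter(run_one_iter, num_iters=200, warmup=20):
    """Return the average time (s/iter) of `run_one_iter` over `num_iters` runs."""
    for _ in range(warmup):
        run_one_iter()
    start = time.time()
    for _ in range(num_iters):
        run_one_iter()
    return (time.time() - start) / num_iters
```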
MMAction2 supports two data formats: raw frames and videos. The former is widely used in previous projects such as [TSN](https://github.com/yjxiong/temporal-segment-networks).
Raw frames are fast to read when an SSD is available, but fail to scale to fast-growing datasets.
(For example, the newest edition of [Kinetics](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) has 650K videos, and the total frames would take up several TBs.)
The latter saves much space but has to perform computation-intensive video decoding at execution time.
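The two formats correspond to different dataset types and pipeline ops in MMAction2-style configs. The following hypothetical excerpts (dataset and op names as in MMAction2 0.x; annotation files and paths are placeholders) contrast the two:

```python
# Video format: compressed videos are decoded at training time (here via decord).
video_cfg = dict(
    type='VideoDataset',
    ann_file='data/kinetics400/kinetics400_train_list_videos.txt',
    data_prefix='data/kinetics400/videos_train',
    pipeline=[
        dict(type='DecordInit'),
        dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=3),
        dict(type='DecordDecode'),
    ])

# Raw-frame format: frames were pre-extracted, so reading is a cheap image load.
rawframe_cfg = dict(
    type='RawframeDataset',
    ann_file='data/kinetics400/kinetics400_train_list_rawframes.txt',
    data_prefix='data/kinetics400/rawframes_train',
    pipeline=[
        dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=3),
        dict(type='RawFrameDecode'),
    ])
```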
1. The git commit id will be written to the version number in step d, e.g. 0.6.0+2e7045c. The version will also be saved in trained models.
   It is recommended that you run step d each time you pull updates from GitHub. If C++/CUDA code is modified, this step is compulsory.
2. Following the above instructions, mmaction2 is installed in `dev` mode: any local modifications made to the code will take effect without reinstalling it (unless you submit some commits and want to update the version number).
3. If you would like to use `opencv-python-headless` instead of `opencv-python`, you can install it before installing MMCV. A consolidated shell sketch of these notes follows below.
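As a consolidated sketch of notes 1–3, assuming (as in OpenMMLab repos of that era) that "step d" is the editable install via `python setup.py develop` and that the repo ships `requirements/build.txt`; verify the exact commands against the install steps for your MMAction2 version:

```shell
# Note 3: install the headless variant BEFORE MMCV if you want it used.
pip install opencv-python-headless
pip install mmcv

git clone https://github.com/open-mmlab/mmaction2.git
cd mmaction2
pip install -r requirements/build.txt
python setup.py develop  # presumably "step d": embeds the git commit id into the version

# Check which version (including the commit id, e.g. 0.6.0+2e7045c) is picked up.
python -c "import mmaction; print(mmaction.__version__)"
```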
### A from-scratch setup script
Here is a full script for setting up MMAction2 with conda and linking the dataset path (supposing that your Kinetics-400 dataset path is $KINETICS400_ROOT).
```shell
conda create -n open-mmlab python=3.7 -y
# ... (intermediate install steps elided)
mkdir data
ln -s $KINETICS400_ROOT data
```
### Using multiple MMAction2 versions
The train and test scripts already modify the `PYTHONPATH` to ensure that they use the MMAction2 in the current directory.
To use the default MMAction2 installed in the environment rather than the one you are working with, you can remove the following line from those scripts.
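In OpenMMLab-style `tools/dist_train.sh` / `tools/dist_test.sh` scripts, the line in question typically looks like this (verify against your checkout):

```shell
# Prepends the repo root to PYTHONPATH so the in-tree mmaction package wins;
# remove it to fall back to the version installed in your environment.
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
```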