Unverified commit 1db8cc84, authored by Jintao Lin, committed by GitHub

Rename sth to mmaction2 (#10)

Parent ae69fbd9
@@ -12,7 +12,7 @@ assignees: ''
There are several common situations in the reimplementation issues as below
1. Reimplement a model in the model zoo using the provided configs
2. Reimplement a model in the model zoo on other dataset (e.g., custom datasets)
-3. Reimplement a custom model but all the components are implemented in MMAction
+3. Reimplement a custom model but all the components are implemented in MMAction2
4. Reimplement a custom model with new modules implemented by yourself
There are several things to do for different cases as below.
......
@@ -6,7 +6,7 @@ from mmaction.apis import inference_recognizer, init_recognizer
def parse_args():
-parser = argparse.ArgumentParser(description='MMAction demo')
+parser = argparse.ArgumentParser(description='MMAction2 demo')
parser.add_argument('config', help='test config file path')
parser.add_argument('checkpoint', help='checkpoint file')
parser.add_argument('video', help='video file')
......
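For reference, the renamed demo entry point builds on the two helpers imported in the hunk above. Below is a minimal sketch of that high-level API; all file paths are placeholders, and the exact signatures may differ between MMAction2 versions.

```python
# Minimal sketch of the API imported in the hunk above. Paths are
# placeholders; signature details may vary across MMAction2 versions.
from mmaction.apis import inference_recognizer, init_recognizer

config_file = 'configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py'
checkpoint_file = 'checkpoints/tsn_r50.pth'

model = init_recognizer(config_file, checkpoint_file, device='cuda:0')
results = inference_recognizer(model, 'demo/demo.mp4', 'demo/label_map.txt')
print(results)  # early releases return a list of (label, score) pairs
```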
@@ -12,10 +12,10 @@ RUN apt-get update && apt-get install -y git ninja-build libglib2.0-0 libsm6 lib
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
-# Install mmaction
+# Install mmaction2
RUN conda clean --all
-RUN git clone https://github.com/open-mmlab/mmaction2.git /mmaction
-WORKDIR /mmaction
+RUN git clone https://github.com/open-mmlab/mmaction2.git /mmaction2
+WORKDIR /mmaction2
ENV FORCE_CUDA="1"
RUN pip install cython --no-cache-dir
RUN pip install --no-cache-dir -e .
@@ -18,9 +18,9 @@ sys.path.insert(0, os.path.abspath('..'))
# -- Project information -----------------------------------------------------
-project = 'MMAction'
+project = 'MMAction2'
copyright = '2020, OpenMMLab'
-author = 'MMAction Authors'
+author = 'MMAction2 Authors'
# The full version, including alpha/beta/rc tags
with open('../mmaction/VERSION', 'r') as f:
......
@@ -13,7 +13,7 @@
<!-- /TOC -->
-We use python files as our config system. You can find all the provided configs under `$MMAction/configs`.
+We use python files as our config system. You can find all the provided configs under `$MMAction2/configs`.
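Since configs are plain Python files, they can also be inspected programmatically. A minimal sketch using mmcv follows; the config file name is illustrative and may not match every checkout.

```python
# Sketch: loading a provided config with mmcv. The file name is
# illustrative, not guaranteed to exist in every release.
from mmcv import Config

cfg = Config.fromfile('configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py')
print(cfg.model.type)  # e.g. 'Recognizer2D'
```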
## Config File Naming Convention
......
@@ -139,7 +139,7 @@ A notebook demo can be found in [demo/demo.ipynb](/demo/demo.ipynb)
### Build a model with basic components
-In MMAction, model components are basically categorized as 4 types.
+In MMAction2, model components are basically categorized as 4 types.
- recognizer: the whole recognizer model pipeline, usually contains a backbone and cls_head.
- backbone: usually an FCN network to extract feature maps, e.g., ResNet, BNInception.
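A hedged sketch of how these components compose, using the `build_model` helper that appears later in this diff (`from mmaction.models import build_model`); the field values below are illustrative assumptions, not a tested config.

```python
# Sketch: assembling a recognizer from a backbone and a cls_head, as
# described above. Field names and values are illustrative assumptions.
from mmaction.models import build_model

model = build_model(
    dict(
        type='Recognizer2D',
        backbone=dict(type='ResNet', depth=50, pretrained=None),
        cls_head=dict(type='TSNHead', num_classes=400, in_channels=2048)))
print(type(model).__name__)  # expected: Recognizer2D
```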
@@ -216,7 +216,7 @@ are good examples which show how to do that.
### Iteration pipeline
-MMAction implements distributed training and non-distributed training,
+MMAction2 implements distributed training and non-distributed training,
which uses `MMDistributedDataParallel` and `MMDataParallel` respectively.
We adopt distributed training for both single machine and multiple machines.
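For the non-distributed path described here, the wrapping step looks roughly as follows (assuming mmcv's parallel wrappers and a single CUDA device; the module is a stand-in).

```python
# Sketch: the non-distributed wrapper named above. Distributed training
# would use MMDistributedDataParallel instead; its setup is omitted here.
import torch.nn as nn
from mmcv.parallel import MMDataParallel

model = nn.Linear(8, 2)  # placeholder module for illustration
model = MMDataParallel(model.cuda(), device_ids=[0])  # requires a GPU
```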
@@ -278,7 +278,7 @@ Here is an example of using 8 GPUs to load TSN checkpoint.
### Train with multiple machines
-If you can run MMAction on a cluster managed with [slurm](https://slurm.schedmd.com/), you can use the script `slurm_train.sh`. (This script also supports single machine training.)
+If you can run MMAction2 on a cluster managed with [slurm](https://slurm.schedmd.com/), you can use the script `slurm_train.sh`. (This script also supports single machine training.)
```shell
[GPUS=${GPUS}] ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR}
......
@@ -51,7 +51,7 @@ def collect_env():
env_info['OpenCV'] = cv2.__version__
env_info['MMCV'] = mmcv.__version__
-env_info['MMAction'] = mmaction.__version__
+env_info['MMAction2'] = mmaction.__version__
return env_info
......
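The renamed key can be verified by dumping the collected environment; a small sketch, assuming `collect_env` is exposed from `mmaction.utils`:

```python
# Sketch: print the environment table that now reports 'MMAction2'.
# Assumes collect_env is importable from mmaction.utils.
from mmaction.utils import collect_env

for name, value in collect_env().items():
    print(f'{name}: {value}')
```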
-# These must be installed before building mmaction
+# These must be installed before building mmaction2
numpy
torch>=1.3
@@ -18,7 +18,7 @@ def test_collect_env():
target_keys = [
'sys.platform', 'Python', 'CUDA available', 'GCC', 'PyTorch',
'PyTorch compiling details', 'TorchVision', 'OpenCV', 'MMCV',
-'MMAction'
+'MMAction2'
]
cuda_available = torch.cuda.is_available()
if cuda_available:
@@ -63,4 +63,4 @@ def test_collect_env():
assert env_info['OpenCV'] == cv2.__version__
assert env_info['MMCV'] == mmcv.__version__
-assert env_info['MMAction'] == mmaction.__version__
+assert env_info['MMAction2'] == mmaction.__version__
@@ -27,7 +27,7 @@ def main():
# init logger before other steps
logger = get_root_logger()
-logger.info(f'MMAction Version: {__version__}')
+logger.info(f'MMAction2 Version: {__version__}')
logger.info(f'Config: {cfg.text}')
# create bench data list
......
@@ -13,7 +13,7 @@ from mmaction.models import build_model
def parse_args():
parser = argparse.ArgumentParser(
-description='MMAction benchmark a recognizer')
+description='MMAction2 benchmark a recognizer')
parser.add_argument('config', help='test config file path')
parser.add_argument(
'--log-interval', default=10, help='interval of logging')
......
@@ -32,7 +32,7 @@ you will get the features and annotation files.
In the context of the whole project (for ActivityNet only), the folder structure will look like:
```
-mmaction
+mmaction2
├── mmaction
├── tools
├── configs
......
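A quick way to confirm the renamed project root matches this layout; the check below covers only the directories visible in the (truncated) tree, and the root path is assumed to be `mmaction2` under the current working directory.

```python
# Sketch: verify the directories shown in the truncated tree above.
import os

root = 'mmaction2'
for sub in ('mmaction', 'tools', 'configs'):
    path = os.path.join(root, sub)
    print(path, '->', 'ok' if os.path.isdir(path) else 'missing')
```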
@@ -86,7 +86,7 @@ In the context of the whole project (for Kinetics-400 only), the *minimal* folde
(*minimal* means that some data are not necessary: for example, you may want to evaluate kinetics-400 using the original video format.)
```
-mmaction
+mmaction2
├── mmaction
├── tools
├── configs
......
@@ -54,7 +54,7 @@ you will get the rawframes (RGB + Flow), videos and annotation files for Moments
In the context of the whole project (for Moments in Time only), the folder structure will look like:
```
-mmaction
+mmaction2
├── data
│   └── mit
│   ├── annotations
......
@@ -53,7 +53,7 @@ you will get the rawframes (RGB + Flow), videos and annotation files for Multi-M
In the context of the whole project (for Multi-Moments in Time only), the folder structure will look like:
```
-mmaction/
+mmaction2/
└── data
└── mmit
├── annotations
......
@@ -63,7 +63,7 @@ you will get the rawframes (RGB + Flow), videos and annotation files for Somethi
In the context of the whole project (for Something-Something V2 only), the folder structure will look like:
```
-mmaction
+mmaction2
├── mmaction
├── tools
├── configs
......
@@ -68,7 +68,7 @@ you will get the rawframes (RGB + Flow), videos and annotation files for THUMOS'
In the context of the whole project (for THUMOS'14 only), the folder structure will look like:
```
-mmaction
+mmaction2
├── mmaction
├── tools
├── configs
......
@@ -63,7 +63,7 @@ you will get the rawframes (RGB + Flow), videos and annotation files for UCF-101
In the context of the whole project (for UCF-101 only), the folder structure will look like:
```
-mmaction
+mmaction2
├── mmaction
├── tools
├── configs
......
@@ -13,7 +13,7 @@ from mmaction.models import build_model
def parse_args():
parser = argparse.ArgumentParser(
-description='MMAction test (and eval) a model')
+description='MMAction2 test (and eval) a model')
parser.add_argument('config', help='test config file path')
parser.add_argument('checkpoint', help='checkpoint file')
parser.add_argument(
......